BLAZE 3.0 / NexGen PRD


Blaze 3.0 / NexGen — Product Requirements Document

Document ID: PRD-BLAZE-3.0
Version: 1.0.0
Status: Draft
Author: Platform Architecture Team
Date: 2026-04-05
Classification: Internal — Product Strategy


Part 1: Vision & Strategy

1.1 Product Vision

Blaze is the agentic product development lifecycle platform powered by knowledge. It provides the complete infrastructure — cloud operations, knowledge management, process orchestration, AI governance, and developer tooling — that enables organizations to build, deploy, operate, and continuously improve AI-powered solutions at enterprise scale.

Every solution built on Blaze inherits the full platform: a 4-store knowledge base, BPMN process execution, compliance-driven development, AI governance controls, contextual micro-learning, and continuous feedback loops. Developers don't build these capabilities — they consume them. The platform gets smarter with every solution built on it.

1.2 Core Architecture

Blaze 3.0 is organized into five layers:

Layer 1: Blaze Cloud Operating Platform
  Multi-tenant provisioning, EKS cluster management, Cloudflare edge,
  DNS & auth, cost management & billing

Layer 2: Blaze Platform Services
  Knowledge Base (4-store), Camunda engine, evidence ingestion,
  RAG copilot, feedback service, auth framework, task queue,
  connector framework, data governance, GDPR, MCP, audit, encryption,
  WebSocket real-time, notification service, LLM gateway

Layer 3: Blaze PDLC (Product Development Lifecycle)
  Agentic SDLC orchestration, solution creation wizard, TDD/BDD/CDD,
  code generation pipeline, PR orchestration, agent analytics,
  micro-learning, continuous PDLC improvement

Layer 4: Blaze AI Governance Framework
  ISO 42001 AIMS, EU AI Act compliance, NIST AI RMF, 101-control catalog,
  13 BPMN governance processes, 5 DMN decision tables, evidence infrastructure,
  observability (OTel + Prometheus + Grafana), AI risk management

Layer 5: Applications (built ON Blaze, BY Blaze)
  KMFlow, Change/ACMOS, SLA, COBOL Migration, [future applications]

1.3 Platform vs Application Separation

| Concern | Platform (Blaze) | Application (KMFlow, Change, SLA, etc.) |
| --- | --- | --- |
| Knowledge Base | 4-store stack + KB service + embedding + RAG | Domain ontology, domain parsers, domain seed data |
| Process Execution | Camunda engine, deployment, task management | Domain BPMN processes, domain DMN tables |
| Auth & Tenancy | JWT, RBAC, RLS, OTP/SSO, multi-tenant isolation | Role definitions, persona configurations |
| AI Governance | 101 controls, 13 BPMN processes, 5 DMN tables, evidence | Domain-specific risk scenarios, domain policies |
| Ingestion | Pipeline framework, generic parsers (PDF, Excel, CSV, images) | Domain parsers (COBOL, ARIS, BPMN, contracts) |
| Feedback | Feedback service, correction promotion, KB growth | Domain-specific feedback categories |
| UI | Design system, layout shell, chart library, LMS engine | Domain pages, domain dashboards, domain components |
| Observability | OTel, Prometheus, Grafana, Phoenix AI | Domain-specific KRIs, domain dashboards |
| Connectors | Connector framework, WAL, sync checkpointing | Domain field mappings, domain-specific API calls |
| Development | Agentic SDLC, code generation framework, PR review | Domain fixtures, domain test scenarios |

1.4 Strategic Differentiators

  1. Knowledge-first: The KB is not a feature — it is the foundation. Every solution starts with knowledge ingestion, and every interaction enriches the KB.
  2. Governance-as-code: AI governance is enforced by executable BPMN processes and DMN decision tables, not documentation. Controls produce machine-verifiable evidence.
  3. Agentic development: 78 specialized AI agents orchestrate the entire SDLC. The PDLC itself is continuously measured and improved.
  4. Platform inheritance: Build once in Blaze, inherit everywhere. A capability added to the platform is immediately available to all solutions.
  5. Full lifecycle: From solution conception through production operation through retirement, every phase is governed, instrumented, and knowledge-captured.

Part 2: Personas & Journeys

2.1 Platform Operator

Role: Provisions and manages the Blaze cloud infrastructure. Responsible for multi-tenant operations, cost management, cluster health, and platform upgrades.

Key touchpoints: Admin UI (Solutions view, Infrastructure view, Cost Report, Credentials, Platform Settings), K8s Console, Operations Dashboard

Journey:
  1. Onboard new organization via admin wizard (industry, tier, integrations)
  2. Provision tenant namespaces and configure auth (OTP/SSO)
  3. Monitor cluster health, node utilization, PVC capacity
  4. Manage cost — track spend by namespace, set budget alerts
  5. Scale solutions up/down based on demand
  6. Rotate credentials, manage LLM provider keys
  7. Respond to platform alerts (pod crash-loops, PVC full, node issues)
  8. Perform platform upgrades and Helm chart updates

Needs: Real-time infrastructure visibility, one-click tenant provisioning, cost attribution by solution/tenant, automated alerting with runbooks


2.2 Solution Architect

Role: Designs new solutions using the Blaze PDLC. Defines the solution's knowledge model, process workflows, personas, and integration points.

Key touchpoints: Solution Creation Wizard, BPMN Modeler, Ontology Designer, Knowledge Graph Explorer

Journey:
  1. Start the solution creation wizard (BPMN-orchestrated process)
  2. Define phase: Name the solution, select category, define personas, specify triggers, choose platform capabilities
  3. Build phase: Design BPMN processes, define DMN decision tables, configure human tasks, set notification rules, define data schema
  4. Select reusable functions from the platform capability catalog
  5. Run the code generation pipeline against source artifacts
  6. Verify generated artifacts (React components, FastAPI stubs, OpenAPI specs, Alembic migrations)
  7. Hand off to developers for KB-integrated development
  8. Review architecture via PR orchestration

Needs: Guided wizard experience, visual process design, capability catalog browsing, deterministic code generation, architecture review feedback


2.3 Developer

Role: Builds solutions in a Blaze PDLC tenant using agentic development. Works in a Docker container with Claude Code, consuming platform services and generated artifacts.

Key touchpoints: Terminal UI (workspace), CI/CD Command Center, Agent Analytics Dashboard, Micro-Learning prompts

Journey:
  1. Access workspace via Cloudflare-protected terminal (OTP/SSO)
  2. Receive agentic development context (work item, branch, SDLC phase)
  3. Write tests first (TDD) from BDD scenarios generated in the define phase
  4. Implement KB-integrated services that augment generated stubs
  5. See agent analytics — which agents fired, what they produced, effectiveness scores
  6. Receive contextual micro-learning prompts based on current task and skill level
  7. Submit PR — 9+ review agents evaluate code quality, security, compliance, coverage
  8. Evidence is automatically collected at every phase transition
  9. Deploy via platform orchestration

Needs: Seamless workspace access, agentic assistance at every step, transparent agent activity, contextual learning, automated compliance evidence, fast feedback loops


2.4 End User

Role: Uses a deployed Blaze-powered solution (varies by application). For KMFlow: consultant. For Change: change practitioner. For SLA: governance analyst.

Key touchpoints: Solution-specific UI, RAG Copilot, Feedback Widget, Learning Hub

Journey:
  1. Access solution via OTP/SSO-protected domain
  2. Navigate solution-specific UI (dashboards, forms, workflows)
  3. Interact with RAG Copilot for context-aware Q&A
  4. Complete Camunda user tasks assigned to their persona/role
  5. Provide feedback (thumbs up/down + corrections) on any interaction
  6. Access micro-learning content relevant to their current activity
  7. View knowledge graph for context about the domain
  8. Receive notifications for task assignments, SLA warnings, and escalations

Needs: Intuitive domain-specific UI, AI-powered assistance, contextual help, transparent process status, ability to provide feedback that improves the system


2.5 Compliance Officer

Role: Reviews AI governance controls, evidence, and compliance posture across all Blaze-powered solutions.

Key touchpoints: AI Governance Dashboard, Evidence Inventory, Control Status, Audit Trail

Journey:
  1. Review overall compliance score across all solutions
  2. Drill into per-framework compliance (ISO 42001, EU AI Act, NIST AI RMF, SOC 2)
  3. Verify evidence completeness — every control has machine-verifiable evidence
  4. Review AI risk register — risk tiers, DMN classifications, residual risk
  5. Monitor KRI trends (drift events, bias breaches, incident containment time)
  6. Review quarterly board report generated by the platform
  7. Conduct spot checks on evidence integrity (SHA-256 hash chain verification)
  8. Respond to AI incidents via the incident response playbook

Needs: Single-pane compliance visibility, evidence integrity verification, regulatory mapping drill-down, automated board reporting, incident workflow


2.6 Customer Admin

Role: Manages their organization's tenant within the Blaze platform. Configures users, integrations, and solution-specific settings.

Key touchpoints: Admin UI (Org/Tenant view, Credentials, SSO Config, Platform Settings)

Journey:
  1. Complete onboarding wizard (industry, tier, PM tool, Git provider, LLM providers)
  2. Configure SSO for their organization (SAML/OIDC via Descope)
  3. Invite team members and assign roles
  4. Connect external systems (Jira, ServiceNow, ADO, Salesforce)
  5. Configure compliance frameworks relevant to their industry
  6. Monitor solution health and usage
  7. Manage cost and billing
  8. Request new solution instances or capability expansions

Needs: Self-service onboarding, SSO configuration, user management, integration setup, usage visibility, cost control


Part 3: Platform Capabilities

Epic E1: Multi-Tenant Cloud Platform

Priority: P0 (Foundation) Source: Existing Blaze admin UI + KMFlow RLS + Change Prisma models

The platform manages a hierarchy of Organizations > Tenants > Projects, each with isolated infrastructure, RBAC, and data scoping.

User Stories

US-E1-01: As a Platform Operator, I want to provision a new organization via a guided wizard, so that new customers can onboard in minutes, not days.

Feature: Organization Onboarding Wizard
  Scenario: Complete 4-step organization onboarding
    Given the admin is on the onboarding wizard
    When they select industry "Banking"
    And enter organization name "Acme Financial"
    And select plan tier "Enterprise"
    And configure PM tool "Jira" and Git provider "GitHub"
    And accept terms of service
    Then an organization "acme-financial" is created
    And compliance frameworks "SOC2, PCI-DSS, SOX, GLBA" are auto-selected
    And a default tenant is provisioned
    And the admin receives a confirmation with next steps

US-E1-02: As a Platform Operator, I want to provision tenant namespaces with full data isolation, so that each tenant's data is completely separated at the database, network, and storage layers.

Feature: Tenant Data Isolation
  Scenario: PostgreSQL Row-Level Security enforces tenant isolation
    Given tenant "acme-dev" exists with engagement_id "uuid-123"
    And tenant "beta-corp" exists with engagement_id "uuid-456"
    When a user in tenant "acme-dev" queries the evidence table
    Then only records with engagement_id "uuid-123" are returned
    And zero records from "uuid-456" are visible
    And the RLS policy is enforced at the database level, not application level
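Database-level enforcement means the application only pins the tenant context; the policy does the filtering. A minimal sketch, assuming a session variable named `app.engagement_id` and a policy named `tenant_isolation` (both illustrative, not necessarily the actual Blaze schema):

```python
# Illustrative sketch of RLS-based tenant isolation. The session variable
# name (app.engagement_id) and policy name are assumptions for this example.

CREATE_POLICY_DDL = """
ALTER TABLE evidence ENABLE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON evidence
    USING (engagement_id = current_setting('app.engagement_id')::uuid);
""".strip()


def rls_session_sql(engagement_id: str) -> tuple[str, tuple[str, ...]]:
    """Parameterized statement the platform middleware would run at the
    start of each request to pin the tenant for this DB session."""
    # set_config keeps the tenant id out of the SQL text itself
    return ("SELECT set_config('app.engagement_id', %s, false)",
            (engagement_id,))
```

With the policy in place, a query issued by a session pinned to `uuid-123` can only ever see rows whose `engagement_id` is `uuid-123`, regardless of what the application-layer code does.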

US-E1-03: As a Platform Operator, I want to manage solution namespaces (start, stop, scale, deploy) from the admin UI, so that I can control infrastructure without kubectl access.

Feature: Solution Namespace Management
  Scenario: Scale up a suspended solution
    Given solution "kmflow" in namespace "sol-blaze--kmflow--acme" is suspended
    When the operator clicks "Start" on the solutions card
    Then EKS nodes are woken if needed
    And component groups scale up in dependency order (datastores -> knowledge -> engine -> application)
    And the solutions card shows "Running" with healthy pod count

US-E1-04: As a Customer Admin, I want to configure SSO for my organization, so that my team can authenticate with our corporate identity provider.

Feature: SSO Configuration
  Scenario: Enable SAML SSO for an organization
    Given organization "acme-financial" has SSO_ENABLED = "false"
    When the admin configures SAML with IDP URL and entity ID
    And sets SSO_ENABLED to "true" via wrangler secrets
    Then users from acme-financial authenticate via SAML
    And the OTP fallback remains available for non-SSO users
    And session cookies are scoped to the organization domain

US-E1-05: As a Platform Operator, I want to view cost attribution by namespace, solution, and tenant, so that I can bill customers accurately and identify cost anomalies.

Feature: Cost Attribution Dashboard
  Scenario: Monthly cost report with namespace breakdown
    Given AWS Cost Explorer data is available for the current month
    When the cost report view loads
    Then it shows total monthly spend, daily rate, and spot savings percentage
    And breaks down cost by service (EKS, EC2, S3, CloudWatch)
    And attributes cost to each namespace/solution
    And flags any service with cost increase > $0.50 from baseline
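The baseline comparison in the last step can be sketched as follows; the function name and input shape are assumptions, not the actual dashboard implementation:

```python
def flag_cost_anomalies(current: dict[str, float],
                        baseline: dict[str, float],
                        threshold: float = 0.50) -> list[str]:
    """Return service names whose cost rose more than `threshold` dollars
    above baseline. Services new this month count against a zero baseline."""
    return sorted(
        svc for svc, cost in current.items()
        if cost - baseline.get(svc, 0.0) > threshold
    )
```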

Epic E2: Knowledge Base Foundation

Priority: P0 (Foundation) Source: KMFlow KB service + Change DocumentIngestionService + SLA pgvector schema

The Knowledge Base is the foundational data layer for every Blaze solution. It consists of 4 stores (PostgreSQL/pgvector for relational + vector, Neo4j for graph, Redis for cache, MinIO for objects) plus a FastAPI KB service that provides unified access.

User Stories

US-E2-01: As a Developer, I want every solution to automatically deploy the 4-store KB stack, so that I don't need to provision databases manually.

Feature: Automatic KB Stack Deployment
  Scenario: New solution gets full KB stack via Helm chart
    Given the blaze-solution Helm chart is configured for solution "my-app"
    When the solution is deployed via blaze-platform.sh
    Then PostgreSQL with pgvector extension is running
    And Neo4j 5-community with APOC is running
    And Redis 7-alpine is running
    And MinIO is running
    And the KB FastAPI service is running and connected to all 4 stores
    And the /health endpoint reports all stores as "ok"

US-E2-02: As a Developer, I want to query the knowledge graph by node type, relationship, and semantic search, so that my services can consume knowledge from the KB.

Feature: Knowledge Graph Queries
  Scenario: Semantic search returns relevant fragments
    Given the KB contains 50 embedded knowledge fragments
    When I search for "account validation rules"
    Then the top results are fragments related to account validation
    And each result includes the source node ID, similarity score, and text
    And results are ranked by cosine similarity to the query embedding

  Scenario: Graph traversal returns relationships
    Given node "PROC-COSGN00C" exists with HAS_RULE relationships
    When I query relationships for source "PROC-COSGN00C" with type "HAS_RULE"
    Then I receive the linked BusinessRule nodes
    And each relationship includes its type and properties
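The cosine-similarity ranking in the first scenario can be sketched in pure Python. This is illustrative only; the real KB service would delegate ranking to pgvector rather than computing similarities in application code:

```python
import math


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def rank_fragments(query_vec: list[float], fragments) -> list[dict]:
    """fragments: iterable of (node_id, text, embedding) tuples.
    Returns results shaped like the KB response, best match first."""
    scored = [
        {"node_id": nid, "text": text, "score": cosine(query_vec, emb)}
        for nid, text, emb in fragments
    ]
    return sorted(scored, key=lambda r: r["score"], reverse=True)
```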

US-E2-03: As a Developer, I want the KB to be seeded idempotently from a graph export, so that redeployments don't duplicate data.

Feature: Idempotent Knowledge Seeding
  Scenario: Seed loader runs twice without duplicating nodes
    Given the knowledge seed has been run once, creating 20 nodes
    When the seed loader runs again with the same graph_export.json
    Then no duplicate nodes are created
    And the node count remains 20
    And MERGE operations are used instead of CREATE
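The MERGE semantics can be illustrated with a keyed upsert; the dict store below is a stand-in for the `MERGE (n {id: $id})` Cypher the seed loader would issue against Neo4j:

```python
def merge_nodes(store: dict, graph_export: list[dict]) -> int:
    """Idempotent seed pass keyed on node 'id'. Re-running with the same
    export updates properties in place instead of creating duplicates.
    Returns the resulting node count."""
    for node in graph_export:
        store.setdefault(node["id"], {}).update(node)
    return len(store)
```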

Epic E3: BPMN Process Engine

Priority: P0 (Foundation) Source: KMFlow cib7 + Change cib7 + SLA Camunda 8 + SLA deploy-and-migrate.sh

Every Blaze solution executes its business logic through BPMN processes and DMN decision tables on Camunda (supporting both cib7/Camunda 7 and Camunda 8 Zeebe).

User Stories

US-E3-01: As a Solution Architect, I want to deploy BPMN/DMN files to the Camunda engine via a REST API, so that process definitions are version-controlled and deployable without manual intervention.

Feature: BPMN/DMN Deployment
  Scenario: Deploy process definitions with duplicate filtering
    Given 3 BPMN files and 2 DMN files exist in the processes/ directory
    When I call POST /api/v1/processes/deploy
    Then all 5 resources are deployed to the Camunda engine
    And the response includes the deployment ID and resource count
    And redeploying the same files skips them (duplicate filtering)

US-E3-02: As an End User, I want to start a process instance and complete assigned tasks, so that I can execute business workflows through the platform.

Feature: Process Instance Lifecycle
  Scenario: Start a process and complete a user task
    Given process "customer-onboarding" is deployed
    When I start an instance with business key "CUST-001"
    Then a process instance is created and running
    And a user task appears for the "intake-team" candidate group
    When I claim and complete the task with variables
    Then the process advances to the next step

US-E3-03: As a Platform Operator, I want Camunda to support both C7 (cib7) and C8 (Zeebe) engines, so that solutions can choose the deployment model that fits their needs.

Feature: Dual Engine Support
  Scenario: Solution uses cib7 (Camunda 7 on-premise)
    Given the solution's docker-compose includes cib7 service
    Then the API communicates via /engine-rest REST API
    And external task workers poll via fetchAndLock

  Scenario: Solution uses Camunda 8 Cloud
    Given the solution is configured for C8 with OAuth2 credentials
    Then the API communicates via Zeebe REST API
    And the Camunda auth module handles token caching
    And the deploy-and-migrate script handles deployment + live migration

Epic E4: Evidence Ingestion Pipeline

Priority: P0 (Foundation) Source: KMFlow 25+ parsers + Change DocumentIngestionService + SLA contract analysis pipeline

Every Blaze solution ingests knowledge through a standardized pipeline: upload -> classify -> parse -> fragment -> embed -> store in KB.

User Stories

US-E4-01: As a Developer, I want a parser factory that automatically selects the right parser based on file type, so that I don't need to know which parser to call for each document.

Feature: Automatic Parser Selection
  Scenario: Upload a PDF document
    Given the ingestion pipeline is running
    When I upload "quarterly-report.pdf" (application/pdf)
    Then the parser factory selects the PDF parser
    And the document is parsed into text fragments
    And each fragment is embedded as a 768-dimension vector
    And fragments are stored in pgvector and linked to Neo4j nodes

  Scenario: Upload a COBOL source file
    Given the COBOL parser extension is registered
    When I upload "COSGN00C.cbl" (text/plain with .cbl extension)
    Then the parser factory selects the COBOL parser
    And program structure, CALL targets, working storage, and business rules are extracted

US-E4-02: As a Solution Architect, I want to register domain-specific parsers without modifying the platform, so that each solution can ingest its own file types.

Feature: Parser Extension Interface
  Scenario: Register a custom parser for ARIS AML files
    Given the platform parser factory supports PDF, Excel, CSV, images, audio, video
    When the KMFlow solution registers an ARIS parser for .aml files
    Then .aml files uploaded to KMFlow are parsed by the ARIS parser
    And other solutions without the ARIS parser reject .aml files with "unsupported format"
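Both stories reduce to an extension-keyed registry. A minimal sketch, assuming hypothetical class and error names (not the actual platform API):

```python
class UnsupportedFormatError(ValueError):
    """Raised when no parser is registered for a file's extension."""


class ParserFactory:
    """Sketch of extension-based parser selection. Platform parsers are
    registered at startup; a solution registers domain parsers (ARIS,
    COBOL, ...) without modifying platform code."""

    def __init__(self):
        self._parsers = {}

    def register(self, extension: str, parser) -> None:
        self._parsers[extension.lower()] = parser

    def for_file(self, filename: str):
        ext = filename.rsplit(".", 1)[-1].lower()
        try:
            return self._parsers[ext]
        except KeyError:
            raise UnsupportedFormatError(f"unsupported format: .{ext}") from None
```

A solution that never registered an ARIS parser simply hits `UnsupportedFormatError` on `.aml` uploads, which matches the rejection behavior in the scenario above.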

Epic E5: RAG Copilot

Priority: P1 (High) Source: KMFlow HybridRetriever + Copilot + Change ChatService + KnowledgeService

Every Blaze solution includes an AI copilot that answers questions using RAG (Retrieval-Augmented Generation) against the solution's knowledge base.

User Stories

US-E5-01: As an End User, I want to ask questions about my domain and receive evidence-based answers with citations, so that I can trust the AI's responses.

Feature: RAG Copilot with Citations
  Scenario: Ask a question about account validation
    Given the KB contains evidence about CardDemo account validation rules
    When I ask "What rules govern account status changes?"
    Then the copilot retrieves relevant fragments via hybrid search (vector + keyword + graph)
    And generates a response citing specific evidence sources
    And each citation links back to the originating KB node
    And the response streams via Server-Sent Events

  Scenario: Copilot respects data residency
    Given the engagement has data residency restriction "EU_ONLY"
    When I ask a question
    Then the LLM call is routed to the local Ollama instance
    And no data is sent to external API endpoints

US-E5-02: As a Developer, I want the copilot to support multi-provider LLM routing with local-first fallback, so that the system works in air-gapped environments and optimizes for cost.

Feature: Multi-Provider LLM Gateway
  Scenario: LLM provider fallback chain
    Given the LLM gateway is configured with providers: [Ollama, Anthropic, OpenAI]
    When Ollama is unavailable
    Then the gateway falls back to Anthropic Claude
    And if Anthropic is unavailable, falls back to OpenAI
    And every LLM call is logged to the AI audit trail with provider, model, tokens, and latency
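The fallback chain plus audit logging might be sketched as follows; the provider callables and audit-record shape are assumptions for illustration:

```python
class AllProvidersFailed(RuntimeError):
    """Raised when every provider in the chain has failed."""


def complete_with_fallback(providers, prompt: str, audit_log: list):
    """Try providers in priority order (e.g. Ollama -> Anthropic -> OpenAI).
    `providers` is a list of (name, callable) pairs; each callable maps a
    prompt to text. Every attempt, successful or not, lands in the audit
    trail before the next provider is tried."""
    for name, call in providers:
        try:
            text = call(prompt)
            audit_log.append({"provider": name, "ok": True})
            return name, text
        except Exception as exc:
            audit_log.append({"provider": name, "ok": False, "error": str(exc)})
    raise AllProvidersFailed("no LLM provider available")
```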

Epic E6: Feedback & Continuous Learning

Priority: P1 (High) Source: KMFlow suggestion_feedback.py + SLA feedback-widget.js + Change (planned)

Every API response includes a feedback URL. User corrections flow back into the KB. The solution gets measurably smarter over time.

User Stories

US-E6-01: As an End User, I want to provide thumbs up/down feedback on any API response, so that my corrections improve the system.

Feature: Feedback Collection
  Scenario: Submit negative feedback with correction
    Given the API response includes X-Feedback-URL header
    When I submit rating=1 with correction "The account limit is $50,000 not $25,000"
    And the response was produced from KB node "RULE-COACTVWC-STATUS"
    Then a FeedbackEntry is created in the database
    And an Evidence node of type "user_correction" is created in Neo4j
    And it is linked to node "RULE-COACTVWC-STATUS" via HAS_CORRECTION relationship

US-E6-02: As a Platform Operator, I want to see a feedback improvement report showing KB growth over time, so that I can demonstrate the platform's continuous learning.

Feature: Feedback Improvement Report
  Scenario: Monthly improvement report
    Given 150 feedback entries exist in the last 30 days
    And 45 of those entries include corrections
    And 12 corrections have been promoted to BusinessRule nodes
    When I request GET /api/v1/feedback/report?days=30
    Then the report shows total_feedback=150, total_corrections=45, total_promotions=12
    And average_rating trend is improving (3.2 -> 3.8)
    And correction_rate and promotion_rate percentages are calculated
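The derived rates can be sketched as below. The exact definitions (corrections per feedback entry, promotions per correction) are assumptions about how the report is computed:

```python
def feedback_report(total_feedback: int, total_corrections: int,
                    total_promotions: int) -> dict:
    """Percentages for the improvement report: correction_rate is the
    share of feedback entries carrying a correction; promotion_rate is
    the share of corrections promoted to BusinessRule nodes."""
    return {
        "total_feedback": total_feedback,
        "total_corrections": total_corrections,
        "total_promotions": total_promotions,
        "correction_rate": round(100 * total_corrections / total_feedback, 1)
            if total_feedback else 0.0,
        "promotion_rate": round(100 * total_promotions / total_corrections, 1)
            if total_corrections else 0.0,
    }
```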

Epic E7: Connector Framework

Priority: P1 (High) Source: KMFlow BaseConnector + SLA task-sync-base + WAL + 5 connectors

Blaze provides a connector SDK for integrating with external systems. The framework handles authentication, retry logic, schema drift detection, sync checkpointing, and reliable delivery via write-ahead log.

User Stories

US-E7-01: As a Developer, I want a base connector class with retry logic and credential management, so that I can build integrations without reimplementing infrastructure.

Feature: Connector Framework
  Scenario: Build a Jira connector using the base framework
    Given the ConnectorFramework provides BaseConnector, @with_retry, CredentialProvider
    When I extend BaseConnector to implement JiraConnector
    Then I inherit: OAuth2/API-key credential management via CredentialProvider
    And automatic retry with exponential backoff via @with_retry
    And schema drift detection against the expected schema template
    And incremental sync checkpointing via Redis-backed sync cursors
    And reliable delivery via write-ahead log (WAL) with sequence numbers
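What `@with_retry` could look like is sketched below, assuming exponential backoff of `base_delay * 2**attempt`; the decorator signature is illustrative, not the framework's actual API:

```python
import functools
import time


def with_retry(max_attempts: int = 3, base_delay: float = 0.5):
    """Retry a connector call with exponential backoff, re-raising the
    last exception once all attempts are exhausted."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts - 1:
                        raise
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator
```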

US-E7-02: As a Solution Architect, I want pre-built connectors for common enterprise systems, so that I don't build integrations from scratch.

Feature: Pre-Built Connectors
  Scenario: Connect to ServiceNow
    Given the platform includes a ServiceNow connector
    When I configure credentials and table mappings
    Then the connector syncs incident/change/SLA records
    And maps ServiceNow fields to the solution's canonical model via YAML schema templates
    And detects schema drift when ServiceNow's schema changes

Epic E8: Data Governance

Priority: P1 (High) Source: KMFlow data governance framework + SLA evidence infrastructure + Change GDPR (planned)

The platform provides data catalog, policy enforcement, evidence lineage, GDPR compliance, and regulatory evidence storage.

User Stories

US-E8-01: As a Compliance Officer, I want every evidence artifact to have a SHA-256 integrity hash and retention tier, so that evidence is tamper-evident and retention-compliant.

Feature: Evidence Integrity
  Scenario: Evidence artifact with integrity hash
    Given I collect test results as phase-2 evidence
    When the evidence is written to evidence/development/feature-x/phase-2-test-results.json
    Then a .sha256 sidecar file is created with the SHA-256 hash
    And the DMN-15 retention routing table assigns a retention tier (7yr/3yr/1yr)
    And the evidence metadata includes processInstanceId, phase, timestamp, and regulatoryTags
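Writing the artifact plus its `.sha256` sidecar can be sketched as follows (a minimal sketch; the payload keys mirror the metadata named above but the function itself is hypothetical):

```python
import hashlib
import json
import pathlib


def write_evidence(path: pathlib.Path, payload: dict) -> str:
    """Write an evidence artifact and a .sha256 sidecar next to it.
    Returns the hex digest so callers can record it elsewhere
    (e.g. in a hash chain)."""
    data = json.dumps(payload, sort_keys=True).encode()
    path.write_bytes(data)
    digest = hashlib.sha256(data).hexdigest()
    path.with_suffix(path.suffix + ".sha256").write_text(digest)
    return digest
```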

US-E8-02: As a Compliance Officer, I want GDPR right-of-erasure to be enforceable across all solutions, so that we can comply with data subject requests.

Feature: GDPR Right of Erasure
  Scenario: Erasure request for a user
    Given user "jane@acme.com" has data across 3 solutions
    When a GDPR erasure request is submitted with a 30-day grace period
    Then after the grace period, the erasure worker anonymizes all PII fields
    And the audit trail records the erasure event
    And a confirmation is available for the data subject

Epic E9: Auth & Security

Priority: P0 (Foundation) Source: KMFlow auth middleware + Change auth module + SLA 5x CF Workers (duplicated)

A unified auth framework replaces the 3 separate implementations. One Descope OTP/SSO auth worker serves all solutions. Platform middleware handles JWT, RBAC, RLS, CSRF, rate limiting, security headers, audit logging, and PEP/PDP.

User Stories

US-E9-01: As a Platform Operator, I want a single auth worker template that protects any solution, so that I don't maintain 5+ copies of the same authentication code.

Feature: Unified Auth Worker
  Scenario: Deploy auth for a new solution
    Given the blaze-auth-worker template exists with configurable branding
    When I create a new worker for solution "my-app" with title and tagline
    Then the worker inherits: Descope OTP, JWKS validation, session cookies,
         rate limiting, CSRF protection, SSO (SAML/OIDC), email domain allowlist
    And the only customization is branding (title, tagline, subtitle)
    And all workers share the same codebase via the shared otp-auth library

US-E9-02: As a Developer, I want platform middleware that automatically enforces security on every request, so that I don't implement security headers, CSRF, rate limiting, and audit logging in each solution.

Feature: Platform Security Middleware Stack
  Scenario: Every HTTP request gets full security treatment
    Given the platform middleware stack is registered
    When any HTTP request is received
    Then it gets: X-Request-Id header (UUID v4 correlation)
    And security headers (CSP, X-Frame-Options, HSTS, X-Content-Type-Options)
    And CSRF validation (double-submit cookie)
    And rate limiting (Redis-backed sliding window)
    And audit logging (mutating requests logged with user, IP, endpoint, engagement)
    And RLS context set (PostgreSQL session variable for tenant isolation)

Part 4: PDLC Capabilities

Epic E10: Solution Creation Wizard

Priority: P0 (Critical) Source: Prior platform 12-step wizard + Blaze admin onboarding wizard + user requirement for BPMN orchestration

A BPMN-orchestrated wizard that guides solution architects through Define > Build > Deploy > Operate. The process itself runs on Camunda, produces CDD evidence at every step, and is instrumented for AI governance compliance.

User Stories

US-E10-01: As a Solution Architect, I want a guided wizard that walks me through defining a new solution, so that I follow the standard methodology every time.

Feature: Solution Creation Wizard — Define Phase
  Scenario: Complete the Define phase (Steps 1-4)
    Given I start the solution creation wizard
    When I complete Step 1 (Definition): name="Customer Onboarding", category="Operations", visibility="Organization"
    And Step 2 (Environment): tenancy="Isolated", billing="Per-seat"
    And Step 3 (Personas): define 3 roles with candidateGroups and permissions
    And Step 4 (Triggers): process starts on "Webhook from CRM" and "Manual via portal"
    Then a solution definition record is created in the registry
    And phase-1 CDD evidence is collected with SHA-256 hash
    And the Camunda process instance advances to the Build phase

US-E10-02: As a Solution Architect, I want to select platform capabilities from a catalog, so that I can compose my solution from reusable building blocks.

Feature: Platform Capability Catalog
  Scenario: Browse and select capabilities for a new solution
    Given the capability catalog lists: KB, Camunda, RAG Copilot, Feedback, Connectors, LMS, Voice Agent, Document Ingestion, OPA Governance
    When I select KB, Camunda, RAG Copilot, Feedback, and Document Ingestion
    Then the solution configuration includes these capabilities
    And deployment will provision all selected platform services
    And the solution inherits the platform middleware stack automatically

US-E10-03: As a Solution Architect, I want the Build phase to generate code artifacts from my source files, so that React components, FastAPI stubs, and database schemas are produced deterministically.

Feature: Solution Creation Wizard — Build Phase
  Scenario: Run code generation pipeline
    Given the solution has source fixtures (BMS maps, CICS transactions, DDL, JCL)
    When the Build phase executes the code generation pipeline
    Then BmsReactGenerator produces React form components from BMS maps
    And CicsFastapiGenerator produces FastAPI router stubs from CICS transactions
    And OpenApiGenerator produces OpenAPI 3.1 YAML from CICS transactions
    And DdlGenerator produces PostgreSQL DDL + Alembic migration from DDL
    And BpmnGenerator produces BPMN process models from JCL
    And a pipeline-manifest.json is created linking every artifact to its generator and fixture

Epic E11: Agentic SDLC Orchestration

Priority: P0 (Critical) Source: Existing Blaze 78-agent architecture + OpenTelemetry instrumentation

The PDLC orchestrates development through 78 specialized AI agents organized in a 3-tier hierarchy. Every agent invocation is instrumented with OpenTelemetry spans for observability and compliance.

User Stories

US-E11-01: As a Developer, I want the SDLC orchestrator to automatically invoke the right agents at each phase, so that I get comprehensive review without manually triggering each one.

Feature: Agentic SDLC Phase Orchestration
  Scenario: Phase 2 Development triggers parallel review agents
    Given I am in Phase 2 (Development) of the SDLC
    When I complete my implementation and tests pass
    Then the orchestrator invokes in parallel:
      | Agent | Purpose |
      | code-quality-reviewer | Code standards and patterns |
      | security-reviewer | Vulnerability and secrets scan |
      | test-coverage-analyzer | Coverage thresholds and TDD compliance |
      | architecture-reviewer | Structural integrity |
    And each agent produces a structured result with score and findings
    And all invocations are traced via OpenTelemetry spans
    And CDD evidence is collected for the phase transition

Epic E14: Agent Analytics Dashboard

Priority: P1 (High) Source: New — user requirement to showcase agentic PDLC as a product capability

A dedicated admin UI view that surfaces agent usage, effectiveness, and continuous-improvement metrics. The agentic PDLC is itself a value proposition and must therefore be visible and measurable.

User Stories

US-E14-01: As a Platform Operator, I want to see which agents are being used, how often, and how effective they are, so that I can demonstrate the agentic PDLC's value and identify improvement opportunities.

Feature: Agent Analytics Dashboard
  Scenario: View agent usage and effectiveness metrics
    Given the Agent Analytics view is open in the admin UI
    When the dashboard loads
    Then it shows for each of the 78 agents:
      | Metric | Description |
      | Invocation count | Total calls in the selected period |
      | Avg duration | Mean execution time |
      | Success rate | Percentage completing without error |
      | Findings produced | Count of issues/recommendations generated |
      | Cost | Estimated API cost (input + output tokens) |
    And agents are grouped by tier (Primary, Visible, Hidden)
    And a trend chart shows usage over time
    And a "Top Findings" section shows the most common issues across all agents

  Scenario: Continuous PDLC improvement tracking
    Given the platform has been running for 30+ days
    When I view the improvement trends
    Then I see average PR review scores trending up and average security findings trending down
    And time-to-merge trending down (agents catching issues earlier)
    And an agent effectiveness score (findings that led to code changes / total findings) trending up
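The effectiveness metric defined in the scenario is a simple ratio; a minimal sketch (the zero-findings guard is an assumption about how the dashboard should handle agents with no output):

```python
def effectiveness_score(actioned_findings: int, total_findings: int) -> float:
    """Findings that led to code changes divided by total findings."""
    if total_findings == 0:
        return 0.0  # assumed convention: no findings means no measurable effectiveness
    return actioned_findings / total_findings

# e.g. 42 of 60 findings resulted in a code change
assert effectiveness_score(42, 60) == 0.7
```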

Epic E15: Micro-Learning & Education

Priority: P1 (High) Source: Change LMS (full backend + frontend in worktree) + user requirement for full feature design

A contextual learning engine that delivers just-in-time education to every persona based on their current activity, skill level, and the knowledge graph.

User Stories

US-E15-01: As a Developer, I want to receive contextual learning prompts when I'm working on a task, so that I learn the platform's best practices as I work.

Feature: Just-in-Time Learning Prompts
  Scenario: Developer receives a methodology tip during TDD
    Given developer "jane" is writing tests in Phase 2
    And her profile shows she has not completed the "TDD Best Practices" module
    When she opens a test file
    Then a non-intrusive learning prompt appears:
      | type | METHODOLOGY_TIP |
      | title | "Red-Green-Refactor: Write the Failing Test First" |
      | priority | MEDIUM |
    And she can: VIEW (expand), DISMISS, or DONT_SHOW_AGAIN
    And her interaction is recorded for learning analytics

  Scenario: Prompt suppression after completion
    Given developer "jane" has completed the "TDD Best Practices" module
    When she opens a test file
    Then no TDD-related prompts appear
    And more advanced prompts (e.g., property-based testing) may appear based on maturity level
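The two suppression scenarios above reduce to a single eligibility rule, sketched below. The rule (skip completed modules, gate advanced prompts on maturity) comes from the scenarios; the prompt catalog, module names for the advanced tier, and the `min_maturity` field are hypothetical.

```python
# Hypothetical prompt catalog; only "TDD Best Practices" appears in the PRD.
PROMPTS = [
    {"module": "TDD Best Practices",
     "title": "Red-Green-Refactor: Write the Failing Test First",
     "min_maturity": 1},
    {"module": "Property-Based Testing",
     "title": "Generate Your Edge Cases",
     "min_maturity": 2},
]

def eligible_prompts(completed_modules: set[str], maturity: int) -> list[dict]:
    # A prompt fires only if its module is incomplete and the user's
    # maturity level meets the prompt's threshold.
    return [
        p for p in PROMPTS
        if p["module"] not in completed_modules and maturity >= p["min_maturity"]
    ]

# Before completing the module, the TDD tip appears.
assert [p["module"] for p in eligible_prompts(set(), maturity=1)] == ["TDD Best Practices"]
# After completion, at maturity 2, only the advanced prompt remains.
assert [p["module"] for p in eligible_prompts({"TDD Best Practices"}, 2)] == ["Property-Based Testing"]
```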

US-E15-02: As a Platform Operator, I want persona-based learning paths with certifications, so that users follow a structured progression from foundation to expert.

Feature: Learning Paths with Certifications
  Scenario: Foundation path for all users
    Given learning paths are configured: Foundation (all), Developer Advanced, Architect, Compliance
    When a new user "bob" is onboarded
    Then the Foundation path is assigned (7 modules, ~40 min)
    And modules include: Platform Overview, KB Fundamentals, BPMN Basics, CDD Principles, AI Governance, Feedback Loops, Security
    And each module has: content (VIDEO/DOCUMENT/QUIZ/INTERACTIVE), learning objectives, quiz
    And completion of all modules + passing quiz unlocks "Blaze Foundation" certification badge

  Scenario: Maturity-based progression
    Given user "bob" has completed Foundation (maturity level 1)
    When he is assigned the Developer role
    Then the "Developer Advanced" path unlocks (maturity level 2)
    And prerequisite validation ensures Foundation is complete
    And modules build on Foundation concepts

US-E15-03: As an End User, I want an embedded learning modal that appears in-context, so that I can learn without leaving my current workflow.

Feature: Embedded Learning Modal
  Scenario: Interactive scenario exercise during task completion
    Given user "sarah" is completing a change readiness assessment
    And the assessment form has a learning trigger for "Stakeholder Analysis"
    When the trigger fires
    Then an EmbeddedLearningModal appears with an InteractiveScenario
    And the scenario presents a branching decision tree
    And completing the scenario awards points toward her learning path
    And the modal can be minimized and returned to later

Part 5: AI Governance

Epic E16: AI Management System (AIMS)

Priority: P0 (Critical) Source: SLA AI governance program (101 controls, 13 BPMN processes, 5 DMN tables) + Blaze governance docs (33 documents)

The platform implements ISO 42001:2023 as a fully operational AI Management System. Controls are enforced by executable BPMN processes, risk is classified by DMN decision tables, and evidence is collected automatically at every control point.

User Stories

US-E16-01: As a Compliance Officer, I want every AI interaction in the platform to be classified by risk tier via DMN-9, so that governance controls are proportional to risk.

Feature: AI Risk Classification via DMN-9
  Scenario: Classify a new AI use case
    Given a developer is building a feature that uses Claude for code generation
    When the AI governance overlay activates
    Then DMN-9 evaluates 5 dimensions:
      | Dimension | Value |
      | Decision Materiality | 3 |
      | Credit/Capital Impact | 1 |
      | Model Complexity | 5 |
      | Data Sensitivity | 2 |
      | Autonomy Level | 4 (+2 GenAI modifier = 6) |
    And the output is: aiRiskTier="Tier 3", euAiActCategory="LIMITED_RISK", activePhasesCount=5
    And only 5 governance phases are activated (not the full 13)
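Illustrative only: DMN-9's actual decision logic is not specified in this PRD. The dimension scores, the +2 GenAI modifier, and the Tier 3 / LIMITED_RISK / 5-phase output come from the scenario; the additive scoring and the tier thresholds below are invented purely to make the sketch executable.

```python
def classify(dimensions: dict[str, int], genai: bool = False) -> dict:
    """Hypothetical risk-tier mapping; real DMN-9 rules may differ entirely."""
    scores = dict(dimensions)
    if genai:
        scores["autonomy_level"] += 2  # GenAI modifier, per the scenario
    total = sum(scores.values())
    # Invented thresholds for illustration:
    if total >= 25:
        tier, category, phases = "Tier 1", "HIGH_RISK", 13
    elif total >= 18:
        tier, category, phases = "Tier 2", "HIGH_RISK", 9
    elif total >= 10:
        tier, category, phases = "Tier 3", "LIMITED_RISK", 5
    else:
        tier, category, phases = "Tier 4", "MINIMAL_RISK", 3
    return {"aiRiskTier": tier, "euAiActCategory": category,
            "activePhasesCount": phases}

# Dimension values from the scenario table.
result = classify(
    {"decision_materiality": 3, "credit_capital_impact": 1,
     "model_complexity": 5, "data_sensitivity": 2, "autonomy_level": 4},
    genai=True,
)
assert result == {"aiRiskTier": "Tier 3", "euAiActCategory": "LIMITED_RISK",
                  "activePhasesCount": 5}
```

The point of the sketch is proportionality: lower totals activate fewer of the 13 governance phases.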

US-E16-02: As a Compliance Officer, I want the 101-control catalog to be enforced as executable BPMN processes, so that governance is not just documentation but operational reality.

Feature: Governance-as-Code
  Scenario: AI-SP1 Risk Classification sub-process executes
    Given a new AI use case enters the governance pipeline
    When AI-SP1 (Risk Classification) executes
    Then controls C-AISP01-01 through C-AISP01-07 are enforced as BPMN tasks:
      | Control | Task | Evidence |
      | C-AISP01-01 | Complete AI risk questionnaire (44 questions) | Form submission |
      | C-AISP01-02 | EU AI Act category determination via DMN-9 | DMN evaluation log |
      | C-AISP01-03 | SR 11-7 model risk tier via DMN-9 | DMN evaluation log |
      | C-AISP01-07 | Register in AI model inventory | Inventory record |
    And each task produces evidence stored with SHA-256 hash
    And the evidence is linked to the control ID in the compliance graph
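The evidence-hashing step can be sketched as below. The SHA-256 hash and the control-ID linkage come from the scenario; the canonical-JSON serialization and the record shape are assumptions about how a stable hash would be produced.

```python
import hashlib
import json

def store_evidence(control_id: str, payload: dict) -> dict:
    """Hash a canonical serialization of the evidence and link it to its control."""
    # Sorted keys + fixed separators give a deterministic byte representation,
    # so the same evidence always yields the same SHA-256 hash.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":")).encode()
    return {
        "control_id": control_id,
        "evidence": payload,
        "sha256": hashlib.sha256(canonical).hexdigest(),
    }

record = store_evidence("C-AISP01-02", {"dmn": "DMN-9", "result": "LIMITED_RISK"})
```

Deterministic hashing matters here: a verifier can recompute the hash from the stored evidence and detect any after-the-fact modification.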

Epic E18: AI Observability

Priority: P0 (Critical) Source: Blaze OTel instrumentation + SLA Prometheus KRIs + Change Prometheus metrics

Every AI interaction across the platform is instrumented with OpenTelemetry traces, Prometheus metrics, and Phoenix AI observability. This is not optional — it is a regulatory compliance obligation under ISO 42001 and EU AI Act Art. 12.

User Stories

US-E18-01: As a Compliance Officer, I want every agent invocation to produce an OpenTelemetry trace, so that I can audit exactly what the AI did, when, and what it produced.

Feature: Agent Invocation Tracing
  Scenario: PR orchestrator trace with sub-agent spans
    Given the pr-orchestrator agent reviews PR #123
    When the orchestrator invokes 9 sub-agents in parallel
    Then a parent OTel trace is created for the PR review
    And each sub-agent produces a child span with:
      | Attribute | Value |
      | agent.type | e.g., "security-reviewer" |
      | agent.duration_ms | Execution time |
      | agent.findings_count | Issues found |
      | agent.tokens.input | Input token count |
      | agent.tokens.output | Output token count |
      | agent.model | e.g., "claude-opus-4-6" |
    And the trace is exported to Phoenix AI for visualization
    And Prometheus counters are incremented for agent usage metrics
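Schematically, each child span carries the attributes from the table above. The sketch below uses plain dicts to show the parent/child structure and attribute names; a real implementation would create these spans with the OpenTelemetry SDK, and the example values are hypothetical.

```python
def sub_agent_span(parent_trace_id: str, agent_type: str, duration_ms: int,
                   findings: int, tokens_in: int, tokens_out: int,
                   model: str) -> dict:
    """Child span for one sub-agent, keyed to the parent PR-review trace."""
    return {
        "trace_id": parent_trace_id,  # ties the child to the parent trace
        "attributes": {
            "agent.type": agent_type,
            "agent.duration_ms": duration_ms,
            "agent.findings_count": findings,
            "agent.tokens.input": tokens_in,
            "agent.tokens.output": tokens_out,
            "agent.model": model,
        },
    }

# One of the 9 sub-agent spans for PR #123 (values illustrative).
spans = [
    sub_agent_span("pr-123-review", "security-reviewer",
                   duration_ms=4200, findings=3,
                   tokens_in=18000, tokens_out=2500,
                   model="claude-opus-4-6"),
]
```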

Epic E20: AI Governance Dashboard

Priority: P1 (High) Source: New admin UI view — user requirement

A dedicated view in the admin UI showing AI governance posture across all solutions.

User Stories

US-E20-01: As a Compliance Officer, I want a single dashboard showing the AI governance posture across all Blaze solutions, so that I can identify compliance gaps at a glance.

Feature: AI Governance Dashboard
  Scenario: View overall compliance posture
    Given the AI Governance view is open in the admin UI
    When the dashboard loads
    Then it shows:
      | Section | Content |
      | Compliance Score | Overall percentage with per-framework breakdown (ISO 42001, EU AI Act, NIST, SOC 2) |
      | Control Status | 101 controls with met/partial/not_met status |
      | Evidence Inventory | Count of evidence artifacts by phase, with SHA-256 verification status |
      | KRI Trends | 7 Key Risk Indicators with 30-day trend charts |
      | Active AI Systems | Count registered in inventory, by risk tier |
      | Recent Incidents | AI incidents with severity, status, and containment time |
    And I can drill into any framework to see article/clause-level compliance
    And I can drill into any control to see its evidence and testing cadence
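The overall compliance score can be sketched from the control statuses in the table. The met/partial/not_met statuses and the 101-control count come from the PRD; the 0.5 weight for partial controls is an assumption about the scoring scheme.

```python
def compliance_score(statuses: list[str]) -> float:
    """Overall compliance percentage from per-control statuses."""
    weight = {"met": 1.0, "partial": 0.5, "not_met": 0.0}  # partial weight assumed
    return round(100 * sum(weight[s] for s in statuses) / len(statuses), 1)

# e.g. 80 met, 15 partial, 6 not met across the 101-control catalog
assert compliance_score(["met"] * 80 + ["partial"] * 15 + ["not_met"] * 6) == 86.6
```

Per-framework breakdowns would apply the same computation to the subset of controls mapped to each framework.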

Part 6: Solution SDK

Epic E21: Parser Extension Interface

Priority: P2 (Medium) Source: KMFlow 25+ parsers + SLA contract parsers

Solutions register domain-specific parsers that extend the platform's ingestion pipeline.

User Stories

US-E21-01: As a Developer, I want to register a custom parser for my solution's file types, so that domain-specific documents are ingested into the KB.

Feature: Parser Extension Registration
  Scenario: KMFlow registers ARIS and Visio parsers
    Given the platform provides the parser factory with generic parsers (PDF, Excel, CSV, images)
    When KMFlow registers: aris_parser (.aml), visio_parser (.vsdx), xes_parser (.xes)
    Then files with those extensions are routed to KMFlow's parsers
    And parsed fragments flow through the standard pipeline (chunk -> embed -> store)
    And the parser extension does not affect other solutions
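A minimal sketch of extension-based routing, assuming the factory keys parsers by file extension. The parser names and extensions come from the scenario; the registry API and the solution-prefix namespacing are hypothetical.

```python
import pathlib

class ParserFactory:
    """Routes files to parsers by extension; solutions register their own."""

    def __init__(self):
        # Platform-provided generic parsers (subset, names hypothetical).
        self._parsers: dict[str, str] = {
            ".pdf": "pdf_parser",
            ".xlsx": "excel_parser",
            ".csv": "csv_parser",
        }

    def register(self, extension: str, parser: str, solution: str) -> None:
        # Prefix with the solution name so one solution's registration
        # cannot silently shadow another's.
        self._parsers[extension] = f"{solution}:{parser}"

    def route(self, filename: str) -> str:
        ext = pathlib.Path(filename).suffix.lower()
        return self._parsers[ext]

factory = ParserFactory()
for ext, parser in [(".aml", "aris_parser"),
                    (".vsdx", "visio_parser"),
                    (".xes", "xes_parser")]:
    factory.register(ext, parser, solution="kmflow")

assert factory.route("model.aml") == "kmflow:aris_parser"
```

Parsed fragments from either path then flow through the same chunk, embed, and store pipeline.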

Epic E22: Ontology Extension Interface

Priority: P2 (Medium) Source: KMFlow ontology + SLA governance ontology

Solutions define their own node types and relationship types that extend the platform's knowledge graph schema.

User Stories

US-E22-01: As a Solution Architect, I want to define my solution's ontology as YAML, so that the knowledge graph schema matches my domain model.

Feature: Ontology Extension
  Scenario: COBOL migration defines mainframe entity types
    Given the platform ontology provides base types: Evidence, Fragment
    When the COBOL migration solution registers ontology.yaml with types:
      | Type | Description |
      | Process | A COBOL program or CICS transaction |
      | DataObject | A VSAM file or DB2 table |
      | BusinessRule | A business rule from COBOL logic |
    And relationship types: CALLS, ACCESSES, HAS_RULE, VALIDATES
    Then Neo4j constraints are created for each type
    And the KB service accepts queries for these types
    And other solutions' types do not conflict (namespace isolation)
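The constraint-creation step can be sketched as below. The node and relationship types come from the scenario; the ontology dict shape (standing in for the parsed `ontology.yaml`), the `solution.Type` label convention for namespace isolation, and the generated Cypher are assumptions.

```python
# Parsed ontology.yaml, represented here as a dict (shape assumed).
ONTOLOGY = {
    "solution": "cobol_migration",
    "node_types": ["Process", "DataObject", "BusinessRule"],
    "relationship_types": ["CALLS", "ACCESSES", "HAS_RULE", "VALIDATES"],
}

def constraint_statements(ontology: dict) -> list[str]:
    """One uniqueness constraint per node type, namespaced by solution."""
    ns = ontology["solution"]
    return [
        f"CREATE CONSTRAINT {ns}_{t.lower()}_id IF NOT EXISTS "
        f"FOR (n:`{ns}.{t}`) REQUIRE n.id IS UNIQUE"
        for t in ontology["node_types"]
    ]

stmts = constraint_statements(ONTOLOGY)
```

Prefixing each label with the solution name is what keeps a `Process` registered by one solution from colliding with another solution's `Process`.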

Part 7: Reference Applications

7.1 KMFlow — Consulting Delivery Platform

Tier: Tier 1 (Full Knowledge Solution) Unique capabilities: 8-step PoV consensus algorithm (BRIGHT/DIM/DARK), TOM analysis engine, shelf data requests, client portal, assessment matrix, pattern library, survey bot, simulation & financial modeling, RACI derivation, knowledge forms coverage Platform consumed: All Layer 2 services + all Layer 3 PDLC + all Layer 4 governance

7.2 Change/ACMOS — AI-Orchestrated Change Management

Tier: Tier 1 (Full Knowledge Solution) Unique capabilities: SCML 7-phase methodology (Onboard→Diagnose→Design→Execute→Adopt→Evaluate), stakeholder analysis, sentiment sensing, capacity modeling, value stream mapping, content generation with governance, conversational assessments, adoption drift detection, wisdom harvesting Platform consumed: All Layer 2 services + LMS engine + voice agent + all Layer 4 governance

7.3 SLA — Software Lifecycle Automation

Tier: Tier 1 (Full Knowledge Solution) Unique capabilities: 8-phase governance lifecycle, contract analysis NLP, vendor management system, 5 external system connectors (Jira, ServiceNow, ADO, Teams, Smartsheet), 21 DMN decision tables, committee voting process, regulatory ingest pipeline (17 frameworks) Platform consumed: All Layer 2 services + all Layer 4 governance (this is where the AI governance program originated)

7.4 COBOL Migration — Mainframe Modernization

Tier: Tier 1 (Full Knowledge Solution) Unique capabilities: COBOL parser, BMS screen map transformer, CICS transaction stub generator, JCL-to-BPMN generator, DB2-to-PostgreSQL migration, CardDemo knowledge graph (11 programs, 4 data stores, 5 business rules) Platform consumed: All Layer 2 services + code generation pipeline + all Layer 4 governance


Appendix A: Cross-Repo Extraction Priority

The following capabilities must be extracted from their current locations into the Blaze platform to eliminate duplication:

Immediate (P0)

  1. Auth framework (3 implementations → 1 platform library)
  2. 4-store KB stack standardization
  3. Security middleware stack (CSRF, rate limiting, security headers, request ID, audit)
  4. Camunda engine support (C7 + C8)
  5. BPMN validator + security scanner (from SLA)
  6. BDD BPMN testing framework (from SLA)

Near-term (P1)

  1. LMS engine (from Change worktree)
  2. Notification service (from Change — only app with email sending)
  3. Hallucination detector + reranker (from Change graphrag-ml)
  4. Connector framework merger (KMFlow BaseConnector + SLA task-sync-base)
  5. AI governance OPA policies (from Change)
  6. Regulatory ingest pipeline (from SLA)
  7. Evidence infrastructure (SHA-256 hash chain, retention tiers)
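The SHA-256 hash chain named in item 7 can be sketched as follows: each record's hash covers its payload plus the previous record's hash, so tampering with any earlier record invalidates every later one. The payload representation is an assumption.

```python
import hashlib

def chain(records: list[bytes]) -> list[str]:
    """Return the hash chain for an ordered list of evidence payloads."""
    hashes, prev = [], b""
    for payload in records:
        # Each link commits to both the payload and the previous hash.
        h = hashlib.sha256(prev + payload).hexdigest()
        hashes.append(h)
        prev = h.encode()
    return hashes

a = chain([b"evidence-1", b"evidence-2"])
b = chain([b"tampered", b"evidence-2"])
assert a[1] != b[1]  # altering record 1 changes every downstream hash
```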

Medium-term (P2)

  1. PEP/PDP field-level access control (from KMFlow)
  2. Voice agent framework (from Change)
  3. Camunda Modeler sync + Optimize export (from SLA)
  4. Schema library unification
  5. Demo persona system standardization

Document Version: 1.0.0 Next Review: Upon completion of Blaze 3.0 sprint planning