1. THE 7 RHETORICAL ARCS
The engine decomposes all incoming semantic data into a 7-layer stack. This allows the AI to differentiate between the *action* being requested and the *value* or *intent* behind it.
01. ESSENCE: Ontology & Identity
02. FORM: Physical / Syntax Representation
03. ACTION: Dynamic Logic / Vectors
04. FRAME: Contextual Boundaries
05. INTENT: Motivation / Goal State
06. RELATION: Networked Dependencies
07. VALUE: Priority & Ethical Weight
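The 7-layer stack above can be sketched as a simple data structure. This is a minimal illustration, not the engine's actual representation; the `ArcStack` name and `is_complete` check are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict

# The seven canonical arcs, in stack order (01-07)
ARC_NAMES = ["Essence", "Form", "Action", "Frame", "Intent", "Relation", "Value"]

@dataclass
class ArcStack:
    # Maps each arc name to its extracted content for one input
    # (ArcStack itself is a hypothetical container, named here for illustration)
    layers: Dict[str, str] = field(default_factory=dict)

    def is_complete(self) -> bool:
        # A decomposition is valid only when all 7 arcs are populated
        return all(name in self.layers for name in ARC_NAMES)

stack = ArcStack()
stack.layers["Action"] = "book a flight"
stack.layers["Intent"] = "attend a conference"
print(stack.is_complete())  # False until all 7 layers are filled
```

Keeping the arcs in a fixed-order list makes the "stack" ordering explicit while the dict holds per-arc content.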
// HIGH-DENSITY DECOMPOSITION LOGIC
// --------------------------------
import openai  # legacy openai<1.0 ChatCompletion interface

SYSTEM_PROMPT = """
Extract 7 canonical rhetorical arcs:
- Essence, Form, Action, Frame, Intent, Relation, Value
Respond only in valid JSON.
"""

def decompose_prompt(user_input):
    # Collapse the token window into the 7-tier arc stack
    return openai.ChatCompletion.create(
        model="gpt-4",  # model name illustrative; the source does not specify one
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_input},
        ],
    )
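Since the system prompt demands "valid JSON" with all seven arcs, the response should be validated before use. The helper below is a hypothetical sketch of that check; `parse_decomposition` is not from the source.

```python
import json

# The seven arcs the system prompt requires in the JSON response
REQUIRED_ARCS = {"Essence", "Form", "Action", "Frame", "Intent", "Relation", "Value"}

def parse_decomposition(raw: str) -> dict:
    # Reject responses that are not valid JSON or are missing any arc
    data = json.loads(raw)
    missing = REQUIRED_ARCS - data.keys()
    if missing:
        raise ValueError(f"Decomposition missing arcs: {sorted(missing)}")
    return data
```

A strict gate like this is what makes the "Respond only in valid JSON" instruction enforceable downstream.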
2. THE TEMPORAL TRIAD
Processing occurs across three macro-temporal phases, ensuring logical consistency from perception to inception.
// TEMPORAL PHASE ARCHITECTURE
// ---------------------------
// PHASE 1: ID_VERIFICATION (Ontological Check)
// PHASE 2: INCEPTION_LAYER (Vector Injection)
// PHASE 3: EXECUTION_GRID (Recursive Output)
MACRO_PHASES = ["ID_VERIFICATION", "INCEPTION_LAYER", "EXECUTION_GRID"]

def process_triad(self, arc_data: Dict[str, dict]) -> Dict[str, dict]:
    # CANONICAL TRIADIC PROCESSING
    all_phase_results: Dict[str, dict] = {}
    for phase_index, phase_name in enumerate(MACRO_PHASES):
        # Dependencies assumed to be instance attributes
        processor = PhaseProcessor(self.llm_interface, self.modifier_matrix, self.user_input)
        all_phase_results[phase_name] = processor.process(phase_index, arc_data)
    return all_phase_results
def evaluate_permutations(self, triad: Dict[str, Dict]):
    # DEFENSIBLE PRIOR ART: ROLE MAPPING
    role_perms = list(itertools.permutations(["Risk", "Reward", "Relation"]))
    # ... scoring logic for all 6 permutations ...
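The elided scoring step might look like the sketch below. The scoring function itself is left as a parameter because the source does not specify it; `evaluate_role_permutations` and `score_fn` are illustrative names.

```python
import itertools
from typing import Callable, Dict, List, Tuple

def evaluate_role_permutations(
    triad: Dict[str, Dict],
    score_fn: Callable[[Dict[str, str], Dict[str, Dict]], float],  # hypothetical scorer
) -> List[Tuple[Tuple[str, ...], float]]:
    roles = ("Risk", "Reward", "Relation")
    phases = list(triad.keys())  # the three macro-phase results
    scored = []
    for perm in itertools.permutations(roles):  # all 3! = 6 orderings
        mapping = dict(zip(perm, phases))       # role -> phase assignment
        scored.append((perm, score_fn(mapping, triad)))
    # Highest-scoring role-to-phase assignment first
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```

Exhaustively scoring all 6 permutations is cheap here, since 3! stays fixed regardless of input size.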
3. RELEVANCE SCORING (CORAL TPU)
The engine benchmarks every arc against the loaded domain template using high-speed edge acceleration.
// TPU ACCELERATION PROTOCOL
// -------------------------
// INPUT: Arc_Embeddings, Template_Tensor
// PROCESS: Cosine_Similarity_Matrix * Weighting_Bias
// OUTPUT: Relevance_Score [-3.0 to +3.0]
def score_relevance(self, arc_text: str, template_text: str) -> float:
    # COSINE SIMILARITY SCORING VIA TPU
    embeddings = self.coral_interface.embed([arc_text, template_text])
    dot_product = np.dot(embeddings[0], embeddings[1])
    norm = np.linalg.norm(embeddings[0]) * np.linalg.norm(embeddings[1])
    # Cosine similarity in [-1, 1], scaled to the [-3.0, +3.0] relevance range
    return (dot_product / norm) * 3.0
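The scoring math can be verified without the Coral embed step. The sketch below reproduces only the cosine-and-scale arithmetic on raw vectors, to show how the [-3.0, +3.0] range arises; it is a standalone check, not the engine's code path.

```python
import numpy as np

def cosine_score(a: np.ndarray, b: np.ndarray) -> float:
    # Same arithmetic as score_relevance, minus the TPU embedding stage
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)) * 3.0)

print(cosine_score(np.array([1.0, 0.0]), np.array([1.0, 0.0])))   # 3.0  (same direction)
print(cosine_score(np.array([1.0, 0.0]), np.array([0.0, 1.0])))   # 0.0  (orthogonal)
print(cosine_score(np.array([1.0, 0.0]), np.array([-1.0, 0.0])))  # -3.0 (opposite direction)
```

Because cosine similarity is bounded in [-1, 1], multiplying by 3.0 yields exactly the stated [-3.0, +3.0] output range; a zero-length vector would still divide by zero, as in the original.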