Workflow Architecture · Feb 20, 2026

From Signal to Analysis: Inside KSINQ's AI Research Workflow

A disciplined process, not any single tool, is what converts raw signals into analysis. This article traces KSINQ's five-stage research pipeline, from the first anomalous data point to a finished, structured research memo, and the quality gate guarding each hand-off.

Why Workflow Matters More Than Tools

Every component we have described in this section — Claude for reasoning, the multi-model roster, MCP for data connectivity, AKShare and global financial data for cross-border data — is a tool. Tools are necessary but not sufficient. What creates research value is how these tools are orchestrated into a repeatable, disciplined research process that consistently converts raw signals into actionable analysis.

This article walks through the KSINQ research workflow end-to-end: from the moment a signal appears to the moment a structured research memo is produced. The workflow has five stages. Each stage has a defined input, a defined output, and a defined quality gate. Nothing moves to the next stage without passing the gate.

Stage 1 — Signal Detection

The workflow begins with automated monitoring. Our signal detection layer continuously scans multiple data sources through MCP connections: market price movements from AKShare and global financial data, news flow from Readwise, QDII premium levels, freight rate indices, and macro indicators. When a pre-defined threshold is triggered — for example, a QDII premium exceeding its 90th percentile historical level, or a divergence between physical and paper commodity prices exceeding two standard deviations — the system generates a Signal Alert.
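The two trigger rules mentioned above can be sketched in a few lines. This is an illustrative reconstruction, not KSINQ's production code; the function names and the shape of the inputs are assumptions.

```python
from statistics import mean, stdev

def percentile(series, q):
    """Linear-interpolation percentile (q in [0, 100])."""
    s = sorted(series)
    pos = (len(s) - 1) * q / 100
    lo = int(pos)
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (pos - lo)

def check_premium_breach(premium_history, latest, q=90):
    """Flag when the latest QDII premium exceeds its q-th percentile historical level."""
    threshold = percentile(premium_history, q)
    return latest > threshold, threshold

def check_divergence_breach(spread_history, latest_spread, n_std=2.0):
    """Flag when a physical-vs-paper price spread exceeds n standard deviations."""
    mu, sigma = mean(spread_history), stdev(spread_history)
    z = (latest_spread - mu) / sigma
    return abs(z) > n_std, z
```

In practice the histories would come from scheduled MCP queries against AKShare and the other sources; here they are plain lists.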

Signal Alerts are not research conclusions. They are starting points. The alert says “something unusual is happening” — it does not say “here is what it means.” The quality gate at this stage is specificity: every alert must identify the exact data point, the threshold that was breached, and the historical context for the breach. Vague alerts like “market volatility increasing” do not pass the gate.
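The specificity gate lends itself to a structural check: an alert that cannot name its data point, breached threshold, and historical context simply fails validation. A minimal sketch, with hypothetical field names and example values:

```python
from dataclasses import dataclass

@dataclass
class SignalAlert:
    data_point: str  # e.g. "Fund X premium: 4.8%"
    threshold: str   # e.g. "90th percentile of trailing 3y premiums (3.1%)"
    context: str     # e.g. "Last comparable breach: Jan 2021, lasted 9 sessions"

def passes_specificity_gate(alert):
    """Vague alerts, i.e. any alert with an empty field, do not pass the gate."""
    return all(f.strip() for f in (alert.data_point, alert.threshold, alert.context))
```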

A Dify workflow orchestrates this stage, running scheduled queries against our data sources and applying threshold logic. The output is a prioritized list of signals that arrives each morning before the researcher begins work.

Stage 2 — Context Assembly

When a signal passes the detection gate, the next stage assembles the analytical context needed to evaluate it. This is where MCP’s value is most visible: Claude automatically queries relevant data sources to build a complete picture around the signal.

For example, if the signal is an unusual spike in the premium on a QDII US-index ETF, the context assembly stage pulls: the historical premium series for that fund and comparable funds (AKShare), the underlying index performance and fund NAV (the global financial data layer), recent SAFE statements or QDII quota data (Readwise-ingested regulatory filings), and sell-side commentary on QDII flows (Readwise).

The quality gate at this stage is completeness: the assembled context must cover the signal from at least three independent angles (price data, fundamental data, and qualitative information). If the context is insufficient — for instance, if no recent regulatory commentary exists — the gap is explicitly flagged rather than papered over.
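The completeness gate can likewise be made mechanical: require all three angles and surface any gap explicitly instead of papering over it. A sketch, assuming the assembled context is keyed by angle:

```python
REQUIRED_ANGLES = ("price", "fundamental", "qualitative")

def completeness_gate(context):
    """Context must cover all three independent angles; gaps are flagged, not hidden.

    Returns (passes, missing_angles)."""
    gaps = [angle for angle in REQUIRED_ANGLES if not context.get(angle)]
    return len(gaps) == 0, gaps
```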

Stage 3 — Multi-Model Analysis

With the context assembled, the analytical work begins. This stage is where the multi-model architecture earns its keep.

Claude performs the core analytical reasoning: constructing the thesis (“the QDII premium spike reflects a structural shift in offshore allocation demand, not a temporary arbitrage dislocation”), building the evidence chain (A→B→C with each link labeled as fact or assumption), and identifying the consensus view and where our analysis diverges from it. This is Howard Marks’ second-level thinking, operationalized.

If the thesis involves quantitative elements — a valuation model, a probability-weighted scenario analysis, an asymmetry calculation — the relevant computation routes to the OpenAI o-series for mathematical verification. The result feeds back into Claude’s reasoning as validated input.

If the signal involves visual data — a chart pattern, a shipping manifest image, satellite imagery — GPT-4.1 extracts the structured data, which then enters Claude’s analytical context.
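The dispatch logic described across the last three paragraphs reduces to a small router. The task schema and model labels here are hypothetical stand-ins for whatever the orchestration layer actually uses:

```python
def route_task(task):
    """Send each sub-task to the model suited for it (hypothetical task schema)."""
    if task.get("kind") == "vision":         # charts, manifests, satellite imagery
        return "gpt-4.1"
    if task.get("kind") == "quantitative":   # valuation, scenario math, asymmetry
        return "openai-o-series"
    return "claude"                          # core analytical reasoning by default
```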

The quality gate at this stage is the falsification criterion. Before the analysis proceeds to the next stage, the thesis must include a defined, observable, time-bounded condition under which it will be abandoned. If the researcher cannot articulate this condition — if the thesis is unfalsifiable — it does not advance. This is the Popperian discipline embedded in the workflow.
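"Defined, observable, time-bounded" suggests a concrete record type for the falsification criterion, which the gate can then check mechanically. A sketch with assumed field names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class FalsificationCriterion:
    metric: str      # observable data point, e.g. "QDII premium on Fund X"
    condition: str   # e.g. "falls below 1.0% for 10 consecutive sessions"
    deadline: date   # time bound: re-examine the thesis after this date regardless

def is_falsifiable(c):
    """An unfalsifiable thesis, one missing any element, does not advance."""
    return bool(c.metric.strip() and c.condition.strip() and isinstance(c.deadline, date))
```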

Stage 4 — Three-Lens Review

The thesis now undergoes the three-lens review that is the methodological core of KSINQ’s research process. AI assists the first two lenses; human judgment governs all three.

The fundamental analysis lens checks thesis quality: Is the evidence chain logically sound? Does the thesis clearly state where it diverges from consensus? Are the assumptions explicitly identified? Claude performs an initial check against these criteria, flagging potential weaknesses.

The risk assessment lens examines downside: What is the probability and impact of the core thesis failing? Does the falsification criterion adequately cover the risk space? Is the assumption set robust to macro shocks? Claude constructs the adversarial case — the strongest argument against the thesis — and presents it alongside the thesis. A human analyst then renders judgment.

The market structure lens assesses the external conditions of the thesis: liquidity conditions, cross-asset correlations, policy lags, and catalyst windows. This lens is primarily human-driven, as market-structure judgment requires a feel for conditions that models do not yet reliably provide.

The quality gate is unanimous passage: the thesis must pass all three lenses. A thesis that is analytically elegant but carries unacceptable risk does not advance. A thesis with controlled risk but unfavorable market conditions does not advance. Only theses that survive all three independent challenges enter the final research output.
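Unanimous passage is the strictest possible aggregation rule: one failing lens vetoes the thesis. A minimal sketch that also reports which lens blocked advancement, with assumed lens names:

```python
def three_lens_gate(verdicts):
    """verdicts maps lens name -> bool. A thesis advances only if every lens approves.

    Returns (advances, failing_lenses)."""
    failing = [lens for lens, approved in verdicts.items() if not approved]
    return not failing, failing
```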

Stage 5 — Output and Archival

The surviving thesis is rendered into a structured research memo with a standardized format: Executive Summary, Thesis, Evidence Chain, Supply Chain Signal, Risk Assessment (including falsification criteria), View Summary, and Timeline. This memo is the research document.

Simultaneously, the workflow triggers monitoring rules tied to the thesis: if the falsification criterion involves a specific data threshold, that threshold is added to the Stage 1 signal detection layer for continuous monitoring. If the thesis has a defined catalyst date, a calendar alert is created. The thesis is not static — it is a living hypothesis under continuous surveillance, consistent with our Popperian commitment to provisionality.
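The feedback loop above, a finished memo re-arming the Stage 1 monitors, can be sketched as a small registration step. The memo fields and the watchlist/calendar shapes are assumptions for illustration:

```python
def register_followups(memo, watchlist, calendar):
    """Wire a completed memo back into continuous surveillance.

    Falsification thresholds join the Stage 1 watchlist; catalyst dates
    become calendar alerts."""
    if memo.get("falsification_threshold") is not None:
        watchlist.append(memo["falsification_threshold"])
    if memo.get("catalyst_date"):
        calendar.append((memo["catalyst_date"], memo["title"]))
    return watchlist, calendar
```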

Finally, the Bayesian prior on the relevant theme is updated. If the thesis involves copper supply dynamics, our model’s prior assumptions about copper are adjusted to reflect the new analysis. Over time, this creates an evolving knowledge base that accumulates analytical insight from every completed research cycle.
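One common way to operationalize such a prior update, offered here as an assumption since the article does not specify the model, is a conjugate Beta-Bernoulli scheme: each completed cycle that confirms or refutes a theme's thesis nudges the prior.

```python
def update_beta_prior(alpha, beta, thesis_confirmed):
    """Conjugate Beta-Bernoulli update after a research cycle resolves."""
    return (alpha + 1, beta) if thesis_confirmed else (alpha, beta + 1)

def prior_mean(alpha, beta):
    """Current belief that a thesis on this theme resolves true."""
    return alpha / (alpha + beta)
```

Starting from a neutral Beta(2, 2) prior, a confirmed copper-supply thesis would move the mean belief from 0.5 to 0.6, a gradual accumulation rather than a wholesale rewrite.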