Research Infrastructure

Decades of knowledge, encoded into AI-native infrastructure

We don't just use AI — we build the connective tissue between foundation models, real-time market data, and proprietary trade intelligence. Here's how the stack works.

Foundation Models

Choosing the right mind for each task

We don't lock into a single model. Multi-model orchestration — matching the right model to each research task — is itself a competitive advantage.
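The routing idea can be sketched in a few lines. This is a minimal illustration, not our production configuration: the task categories and model labels below are placeholders.

```python
# Hypothetical task-based model routing. The task types and model
# labels are illustrative stand-ins, not actual deployment names.
TASK_ROUTES = {
    "deep_reasoning": "claude",   # long-context thesis work
    "multimodal":     "gpt",      # chart / scanned-document parsing
    "preprocessing":  "gemini",   # filtering, summarization, metadata
}

def route(task_type: str) -> str:
    """Pick a model family for a research task, falling back to the
    primary reasoning engine when the task type is unrecognized."""
    return TASK_ROUTES.get(task_type, "claude")
```

The fallback matters: anything the router doesn't recognize goes to the strongest reasoning model rather than the cheapest one.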

Anthropic Claude

Primary reasoning engine. Long-context processing for financial reports, causal chain analysis, thesis stress-testing, and bilingual cross-referencing. The backbone of our research workflow.

OpenAI (GPT-4.1 / o3 / o4-mini)

Complementary model for multimodal tasks — chart parsing, scanned document analysis, and complex quantitative reasoning where specialized capabilities matter.

Google Gemini

Efficient preprocessing and triage engine. Initial news filtering, routine summarization, metadata extraction — handling high-volume preprocessing cost-effectively so budget concentrates on the analytical core.

Data Layer & MCP

Connecting the world's financial data

Through Model Context Protocol (MCP), our AI talks directly to data sources — turning "human queries data, feeds AI" into "AI queries data autonomously."
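Concretely, MCP frames tool invocations as JSON-RPC 2.0 messages, so the model can emit a data request itself rather than waiting for a human to paste results in. The tool name and arguments below are illustrative, not a real server's schema:

```python
import json

def mcp_tool_call(tool: str, arguments: dict, request_id: int = 1) -> str:
    """Build an MCP tools/call request (MCP uses JSON-RPC 2.0 framing).
    The tool name and arguments passed in are illustrative only."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# The model emits a request like this autonomously mid-analysis:
req = mcp_tool_call("get_quote", {"symbol": "0700.HK"})
```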

Global Financial Data

Global equities, fixed income, derivatives, ESG data. The foundation for cross-border valuation comparisons and global fundamental analysis.

AKShare

China market data core — A-shares, HK equities, futures, funds, macro indicators. Open-source, API-native, purpose-built for AI integration.

Readwise

Research ingestion pipeline — auto-collecting sell-side reports, news, papers, social signals. AI extracts key views and maps consensus vs. our internal thesis.

Dify

Workflow orchestration — connecting models, data sources, and tools into repeatable automated research pipelines. The glue layer of our entire stack.
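The "glue layer" idea reduces to composing steps into one repeatable callable. Dify defines such workflows declaratively; this sketch only illustrates the composition pattern, with toy stand-ins for each stage:

```python
from typing import Any, Callable

def pipeline(*steps: Callable[[Any], Any]) -> Callable[[Any], Any]:
    """Chain steps so each one's output feeds the next."""
    def run(payload: Any) -> Any:
        for step in steps:
            payload = step(payload)
        return payload
    return run

# Toy stages standing in for preprocessing, analysis, and output:
research = pipeline(
    lambda text: text.strip(),               # preprocess/triage stage
    lambda text: {"thesis": text},           # core reasoning stage
    lambda memo: f"MEMO: {memo['thesis']}",  # structured output stage
)
```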

Cloud & Compute

Where intelligence runs

Cross-border finance demands specific cloud architecture: data sovereignty, compliance, latency optimization, and multi-region consistency.

Amazon Web Services

Global infrastructure backbone. Bedrock for Claude deployment, Data Exchange for market feeds, cross-region data synchronization.

Microsoft Azure

Azure OpenAI Service for enterprise-grade model deployment. Microsoft 365 ecosystem integration for research workflow automation.

Google Cloud Platform

BigQuery for large-scale market data analysis. Vertex AI for custom model training. Global network infrastructure for latency-sensitive processing.

Workflow Architecture

From question to conviction

An end-to-end research pipeline that connects signal detection, context assembly, multi-model analysis, and three-lens review.

1

Signal detection

Automated monitoring scans market data, news, social media, and trade flows. Pre-defined thresholds trigger alerts.
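Threshold-triggered alerting can be sketched as a simple scan over named limits. The metric names and limits here are placeholders, not our actual monitoring rules:

```python
# Illustrative thresholds only; real rules and limits differ.
THRESHOLDS = {
    "price_move_pct": 5.0,  # absolute daily move, percent
    "volume_ratio":   3.0,  # volume vs. trailing average
}

def detect_signals(observation: dict) -> list[str]:
    """Return the names of all metrics that breach their threshold."""
    return [
        name for name, limit in THRESHOLDS.items()
        if abs(observation.get(name, 0.0)) >= limit
    ]
```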

2

Context assembly

AI via MCP automatically pulls historical data, sell-side consensus, related assets, and macro context into a unified analysis workspace.
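The assembly step amounts to fanning out to several sources and merging results into one workspace the model reasons over. The fetchers below are stubs standing in for MCP-backed sources:

```python
# Stub fetchers; in practice each would be an MCP tool call.
def fetch_history(ticker):   return {"history": f"5y prices for {ticker}"}
def fetch_consensus(ticker): return {"consensus": f"sell-side views on {ticker}"}
def fetch_macro(_ticker):    return {"macro": "rates, FX, PMI snapshot"}

def assemble_context(ticker: str) -> dict:
    """Merge all sources into a single analysis workspace."""
    workspace = {"ticker": ticker}
    for fetch in (fetch_history, fetch_consensus, fetch_macro):
        workspace.update(fetch(ticker))
    return workspace
```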

3

Multi-model analysis

The primary LLM handles core reasoning (causal chains, thesis stress-testing). Supporting models handle data processing and quantitative analysis.

4

Three-lens review

Fundamental analysis checks thesis quality. Risk assessment evaluates exposure. Market structure analysis assesses dynamics. AI assists the first two; a human makes the final call.
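The gating logic is deliberately conjunctive: all three lenses must clear, and human approval is a hard requirement, never inferred. A minimal sketch:

```python
def passes_review(fundamental_ok: bool, risk_ok: bool,
                  structure_ok: bool, human_approved: bool) -> bool:
    """All three lenses must clear AND a human must explicitly sign off;
    AI scores can inform the first lenses but never substitute for approval."""
    return fundamental_ok and risk_ok and structure_ok and human_approved
```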

5

Output & action

Structured research memo generated. Monitoring rules updated. Bayesian priors adjusted for the next cycle.
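The prior adjustment is a standard Bayes' rule update on thesis conviction. The numbers below are illustrative only:

```python
def update_prior(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Posterior P(thesis | evidence) via Bayes' rule for a binary thesis."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# A data point twice as likely under the thesis raises conviction
# from 50% to ~67% (illustrative likelihoods, not calibrated values):
posterior = update_prior(prior=0.50, p_evidence_if_true=0.8,
                         p_evidence_if_false=0.4)
```

The updated posterior becomes the prior for the next monitoring cycle, which is what "Bayesian priors adjusted" means operationally.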