AI Agents & Multi-LLM Analysis
The Confluence Engine is powered by a distributed layer of AI agents — autonomous, task-specific models that continuously analyze assets, scan global data, and generate forward-looking insights.
At its core is a multi-LLM architecture. Instead of relying on a single AI model or vendor, Tharwa’s agent layer runs multiple commercial LLMs in parallel, including models from OpenAI, Anthropic (Claude), and Google (Gemini), among others. This setup allows the protocol to capture broader signal coverage, reduce model bias, and verify consistency across outputs.
The result is a network of AI analysts, each independently evaluating macro factors, asset performance, and market signals, then converging on a unified set of asset-level recommendations.
Why AI? Why Now?
Real-world assets don’t move based on blockchain activity. They move based on geopolitics, inflation cycles, rate decisions, policy changes, and economic data releases. Monitoring this data manually, or worse, reacting after the fact, puts portfolios at a disadvantage.
Tharwa uses AI agents to:
Continuously monitor relevant macro and microeconomic inputs
Generate structured reports on each asset in the portfolio
Flag risks, shifts, or opportunities before they impact performance
Feed this intelligence into the portfolio optimizer layer
This AI-first approach makes the Confluence Engine capable of seeing what most on-chain systems miss and adjusting proactively.
How It Works
Each AI agent is assigned to an asset or macro category (e.g., gold, real estate, oil, T-bills, or regional market conditions). These agents follow a multi-step process:
Information Gathering
Agents use browser-based tools and APIs to ingest live data from sources like central banks, commodity markets, macro news, or real estate indices.
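For illustration, a minimal ingestion sketch in Python is shown below. The endpoint URLs and the gather_inputs helper are hypothetical stand-ins; the exact tools and data providers the agents use are not specified here.

import requests

# Hypothetical data sources; the real agents pull from central banks,
# commodity markets, macro news feeds, and real estate indices.
SOURCES = {
    "us_cpi": "https://api.example.com/macro/us-cpi",
    "gold_spot": "https://api.example.com/commodities/gold-spot",
}

def gather_inputs(asset: str) -> dict:
    """Fetch raw inputs for one asset and return them keyed by source."""
    raw = {}
    for name, url in SOURCES.items():
        resp = requests.get(url, timeout=10)
        resp.raise_for_status()
        raw[name] = resp.json()
    return {"asset": asset, "inputs": raw}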
Multi-LLM Evaluation
Each agent sends structured prompts to multiple LLMs (e.g., “What macro conditions will affect gold in the next 30 days?”).
The responses are returned, parsed, and evaluated for signal consistency and confidence.
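A minimal sketch of this fan-out is shown below, assuming a placeholder query_model wrapper around each vendor's SDK and illustrative model identifiers; it is not the production prompt or parsing logic.

from concurrent.futures import ThreadPoolExecutor

PROMPT = "What macro conditions will affect {asset} in the next 30 days?"
MODELS = ["openai", "claude", "gemini"]  # placeholder model identifiers

def query_model(model: str, prompt: str) -> dict:
    """Placeholder: wrap the vendor SDK call for `model` and parse its reply
    into {"action": ..., "confidence": ..., "drivers": [...], "rationale": ...}."""
    raise NotImplementedError

def evaluate_asset(asset: str) -> dict:
    prompt = PROMPT.format(asset=asset)
    # Query all models in parallel
    with ThreadPoolExecutor(max_workers=len(MODELS)) as pool:
        results = dict(zip(MODELS, pool.map(lambda m: query_model(m, prompt), MODELS)))
    # Simple consistency check: how strongly do the models agree on the action?
    actions = [r["action"] for r in results.values()]
    agreement = max(actions.count(a) for a in set(actions)) / len(actions)
    return {"asset": asset, "responses": results, "agreement": agreement}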
Signal Synthesis
The agent compares outputs across models and produces a combined “asset intelligence packet”: a snapshot of drivers, opportunities, and forward-looking conditions.
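The packet format is not fixed in this document; the sketch below assumes one plausible structure and a simple majority-vote synthesis across model responses.

from dataclasses import dataclass, field

@dataclass
class AssetIntelligencePacket:
    """Illustrative shape of an asset intelligence packet."""
    asset: str
    drivers: list = field(default_factory=list)
    recommended_action: str = "hold"   # "buy" / "hold" / "sell"
    confidence: float = 0.0            # share of models agreeing with the action
    rationale: str = ""

def synthesize(asset: str, responses: dict) -> AssetIntelligencePacket:
    """Combine per-model outputs via a simple majority vote."""
    actions = [r["action"] for r in responses.values()]
    action = max(set(actions), key=actions.count)
    drivers = sorted({d for r in responses.values() for d in r.get("drivers", [])})
    return AssetIntelligencePacket(
        asset=asset,
        drivers=drivers,
        recommended_action=action,
        confidence=actions.count(action) / len(actions),
        rationale="; ".join(r.get("rationale", "") for r in responses.values()),
    )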
Feed into Optimization Layer
These packets are delivered to the quantitative layer, which uses them to enhance portfolio allocation decisions.
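One plausible way a packet could enter the quantitative layer is as a small tilt to an asset's expected return, scaled by model agreement. The mapping and tilt size below are illustrative assumptions, not the production coupling, and packets is assumed to map each asset to an intelligence packet like the one sketched above.

ACTION_TILT = {"buy": 1.0, "hold": 0.0, "sell": -1.0}   # illustrative mapping

def tilt_expected_returns(mean_ret: dict, packets: dict, strength: float = 0.002) -> dict:
    """Nudge each asset's expected return in the direction of its AI signal,
    scaled by confidence; `strength` is an illustrative 20 bps cap."""
    tilted = {}
    for asset, mu in mean_ret.items():
        pkt = packets.get(asset)
        tilt = strength * ACTION_TILT[pkt.recommended_action] * pkt.confidence if pkt else 0.0
        tilted[asset] = mu + tilt
    return tilted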
This process happens in rolling intervals, allowing the system to refresh its view of the world regularly, far faster than any human analyst team could sustain.
Benefits of a Multi-LLM Setup
Redundancy: No single model dominates the decision-making process. If one model misses or misinterprets a signal, others may catch it.
Reduced Bias: Each LLM is trained on different datasets and has unique reasoning patterns. Running them together smooths out outliers.
Faster Iteration: Agents can cross-check insights in parallel and flag anomalies across timeframes or regions.
Explainability: Output packets are logged, auditable, and can be reviewed by humans or fed into dashboards. These are not black-box decisions.
AI-Driven Asset Monitoring and Optimization
A multi-LLM agent architecture is used for each asset in the basket, leveraging multiple commercial models such as those from OpenAI, Anthropic (Claude), Google (Gemini), and DeepSeek. Each model periodically returns an asset analysis, including:
key performance drivers
recommended actions
supporting rationale
The system uses a Chromium-based browser to navigate and extract relevant web data (e.g., news, macro forecasts), which is then parsed and embedded via LLM API endpoints to enrich the analysis. This provides dynamic, market-aware inputs to our optimization layer.
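As a rough sketch of this pipeline, the snippet below drives headless Chromium via Playwright and embeds the extracted text through the OpenAI embeddings endpoint; both choices are stand-ins for whatever browser driver and LLM API the production agents actually use.

from playwright.sync_api import sync_playwright
from openai import OpenAI

def browse_and_embed(url: str) -> list[float]:
    """Fetch a page with headless Chromium and embed its visible text."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, timeout=30000)
        text = page.inner_text("body")[:8000]  # truncate to keep the request small
        browser.close()
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding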
AI Output Example – Gold Market Analysis
Each AI agent is tasked with extracting real-time asset insights and producing structured outputs that include price trends, key market drivers, and suggested portfolio actions. Here's a simplified example of the output from one of our gold market agents:
"gold_price_analysis": {
"January_2025": {
"average_price_usd_per_oz": 2707.61,
"highest_price_usd_per_oz": 2798.46,
"lowest_price_usd_per_oz": 2623.91,
"monthly_change_percent": -1.7,
"key_drivers": [
"Stable economic indicators",
"Anticipation of upcoming central bank meetings",
"Modest inflation expectations"
],
"recommended_action": "hold",
"rationale": "Market was consolidating, awaiting clearer signals. No strong bullish or bearish catalysts."
},
"February_2025": {
"average_price_usd_per_oz": 2896.43,
"highest_price_usd_per_oz": 2951.42,
"lowest_price_usd_per_oz": 2813.34,
"monthly_change_percent": +7.0,
"key_drivers": [
"Increased central bank gold purchases",
"Geopolitical tensions influencing safe-haven demand",
"Fluctuations in currency markets"
],
"recommended_action": "buy",
"rationale": "Strong upward momentum with institutional demand signals; still room for further upside."
}
}
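Downstream components only need a compact signal from such a packet. Below is a small, illustrative parser (field names taken from the example above; the helper name is hypothetical):

import json

def latest_gold_signal(packet_json: str) -> tuple[str, float]:
    """Pull the most recent month's recommendation and price change
    from a gold analysis packet like the example above."""
    data = json.loads(packet_json)["gold_price_analysis"]
    latest_month = list(data)[-1]   # assumes months are listed chronologically
    entry = data[latest_month]
    return entry["recommended_action"], entry["monthly_change_percent"]

# e.g. latest_gold_signal(raw_json) -> ("buy", 7.0)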
Confluence Engine
The Confluence Engine integrates the results from both AI analysis and quantitative optimization to suggest portfolio rebalancing strategies. Initially, this process is supervised, with the engine’s output reviewed and executed by domain experts.
However, the long-term vision is for the Confluence Engine to autonomously execute rebalancing decisions. With Tharwa's inclusion of on-chain real-world assets in its initial asset basket, it is feasible to carry out these transactions directly on-chain, enabling seamless and automated portfolio adjustments.
An early version of the Confluence Engine algorithm is illustrated below:
function CONFLUENCE_ENGINE(asset_list, start_date, params):
    # Load and align historical returns for off-chain and on-chain assets
    # (on_chain_tokens, LLM_MODELS, recent_prices, and manual_mode are
    # configuration or pre-loaded inputs)
    returns = LOAD_AND_PREPROCESS_ASSET_DATA(asset_list, start_date)
    on_chain_returns = LOAD_ONCHAIN_DATA(on_chain_tokens)
    all_returns = ALIGN_RETURNS([returns, on_chain_returns])

    # Gather and embed news / social context for each asset
    for asset in asset_list:
        embed[asset] = BROWSE_AND_EMBED_NEWS_SOCIAL(asset)

    # Query every model in the LLM pool for every asset
    for asset in asset_list:
        for model in LLM_MODELS:
            response = CALL_LLM(model, asset, embed[asset], recent_prices[asset])
            llm_analysis[asset][model] = PARSE_LLM_RESPONSE(response)

    # Aggregate per-model analyses into a single action signal per asset
    for asset in asset_list:
        llm_action[asset] = AGGREGATE_LLM(llm_analysis[asset])

    # Quantitative layer: CVaR-optimal weights from historical returns
    mean_ret = MEAN_RETURNS(all_returns)
    weights = SOLVE_CVaR_OPTIM(all_returns, mean_ret, params)

    # Blend AI signals with optimizer weights into a final allocation
    final_alloc = CONFLUENCE_DECISION(llm_action, weights)

    if manual_mode:
        PRESENT(final_alloc)
        if APPROVED():
            EXECUTE_TRADES(final_alloc)
    else:
        EXECUTE_TRADES(final_alloc)

    return final_alloc
end function
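SOLVE_CVaR_OPTIM is left abstract above. Below is a minimal sketch of a standard Rockafellar-Uryasev CVaR minimization, written with cvxpy under illustrative constraints (long-only, fully invested, minimum expected return); the solver choice, constraints, and parameter names are assumptions rather than Tharwa's production implementation.

import cvxpy as cp
import numpy as np

def solve_cvar_optim(all_returns: np.ndarray, mean_ret: np.ndarray,
                     alpha: float = 0.95, min_return: float = 0.0) -> np.ndarray:
    """Minimize portfolio CVaR at level `alpha` over historical scenarios
    (rows of `all_returns`), subject to long-only, fully-invested weights."""
    T, N = all_returns.shape
    w = cp.Variable(N)        # portfolio weights
    var = cp.Variable()       # VaR threshold (auxiliary variable)
    u = cp.Variable(T)        # scenario losses exceeding VaR
    losses = -all_returns @ w
    constraints = [
        u >= 0,
        u >= losses - var,
        cp.sum(w) == 1,
        w >= 0,
        mean_ret @ w >= min_return,
    ]
    cvar = var + cp.sum(u) / ((1 - alpha) * T)
    cp.Problem(cp.Minimize(cvar), constraints).solve()
    return w.value

At a rebalance step, such a function would be called with the aligned return matrix and mean-return vector produced earlier in the pipeline, and its weights passed to the confluence decision alongside the aggregated LLM actions.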
Example Use Case