Human-Layer Risks
Signal Spam & Noise Injection
Attack: Malicious actors submit low-quality or random signals to dilute the signal pool and degrade AI inference.
Mitigations:
Staking requirement in $AXORA for signal submission
ZK-weighted reputation penalizes consistently low-impact signals
Signal orthogonality scoring (crowd-echo signals lose weight)
AI-driven anomaly detection for entropy spikes
Residual Risk: Low. Staking costs and reputation penalties make sustained spam economically irrational.
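The entropy-spike detection above can be sketched as follows. This is an illustrative example, not AxoraAI's implementation: the function names (`shannon_entropy`, `flag_noise`) and the 0.9 threshold are assumptions. The idea is that uniform-random spam has near-maximal entropy across signal categories, while a genuine view concentrates on a few categories.

```python
import math
from collections import Counter

def shannon_entropy(signals):
    """Shannon entropy (in bits) of one contributor's signal distribution."""
    counts = Counter(signals)
    total = len(signals)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_noise(signals, n_categories, threshold=0.9):
    """Flag a contributor whose signal entropy is close to the maximum
    possible (i.e., near-uniform submissions), which suggests random
    spam rather than an informative view. Threshold is a free parameter."""
    max_entropy = math.log2(n_categories)
    return shannon_entropy(signals) >= threshold * max_entropy
```

A contributor submitting mostly "up" signals sits well below the entropy ceiling and passes; one cycling uniformly through all categories hits the ceiling and gets flagged for anomaly review.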
Coordinated Signal Manipulation
Attack: Groups collude to submit correlated signals to bias strategy direction.
Mitigations:
Correlation clustering detection
Penalization of highly synchronized signal cohorts
Weight decay on overrepresented viewpoints
No guarantee that any signal leads to execution
Key Insight: Because contributors cannot observe execution outcomes or one another's submissions, coordination is fragile and expensive to sustain.
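The correlation-clustering mitigation can be sketched as a pairwise check over contributors' signal histories, grouping near-duplicate submitters into cohorts that can then be down-weighted. This is a minimal sketch under assumed names and thresholds (`correlated_cohorts`, the 0.95 cutoff), not the production detector.

```python
def pearson(a, b):
    """Pearson correlation of two equal-length signal histories."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def correlated_cohorts(histories, threshold=0.95):
    """Single-link grouping: contributors whose histories are
    near-duplicates (|r| >= threshold) land in the same cohort.
    Returns only cohorts with more than one member."""
    ids = list(histories)
    parent = {i: i for i in ids}

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for idx, a in enumerate(ids):
        for b in ids[idx + 1:]:
            if abs(pearson(histories[a], histories[b])) >= threshold:
                parent[find(a)] = find(b)

    cohorts = {}
    for i in ids:
        cohorts.setdefault(find(i), []).append(i)
    return [c for c in cohorts.values() if len(c) > 1]
```

Each member of a flagged cohort could then have its weight divided by the cohort size, so a colluding group carries no more influence than a single honest contributor.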
Insider Signal Poisoning
Attack: Sophisticated insiders submit misleading signals to profit externally (e.g., front-running AxoraAI).
Mitigations:
Signals are non-executable and low-dimensional
AI aggregates across many contributors
Execution timing is unknown and variable
No direct mapping between signal and trade
Residual Risk: Moderate but bounded. Insiders cannot deterministically map a signal to a trade or control execution outcomes.
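Why aggregation across many contributors bounds a poisoner's influence can be illustrated with a robust aggregator such as a trimmed mean. This is a hedged sketch, one standard robust-statistics technique, not necessarily the aggregation AxoraAI uses; `trimmed_mean` and the 20% trim fraction are assumptions for illustration.

```python
def trimmed_mean(scores, trim=0.2):
    """Aggregate contributor signal scores after discarding the top and
    bottom `trim` fraction, so a small number of extreme (poisoned)
    submissions cannot drag the aggregate toward an attacker's target."""
    s = sorted(scores)
    k = int(len(s) * trim)
    core = s[k:len(s) - k] if k else s
    return sum(core) / len(core)
```

With eight honest scores near 0.15 and two poisoned scores of 10.0, the plain mean jumps above 2.0 while the trimmed mean stays near the honest consensus: the insider pays the submission cost but barely moves the output.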