Cryptonews

Malicious Network Devices Pose Emerging Risk to Crypto Industry's Technical Experts

Source
cryptonewstrend.com
Published
Malicious Network Devices Pose Emerging Risk to Crypto Industry's Technical Experts

Table of Contents A team from the University of California has uncovered significant security vulnerabilities in certain third-party artificial intelligence routing platforms that enable the theft of cryptocurrency credentials and insertion of harmful code into development environments. 26 LLM routers are secretly injecting malicious tool calls and stealing creds. One drained our client $500k wallet. We also managed to poison routers to forward traffic to us. Within several hours, we can directly take over ~400 hosts. Check our paper: https://t.co/zyWz25CDpl pic.twitter.com/PlhmOYz2ec — Chaofan Shou (@Fried_rice) April 10, 2026 The research findings appeared in a newly released academic paper examining what investigators termed “malicious intermediary attacks” targeting the large language model (LLM) infrastructure ecosystem. These AI routing platforms function as intermediary services positioned between software developers and major AI service providers such as OpenAI, Anthropic, and Google. Their primary function involves managing and directing API traffic across various AI platforms. The fundamental security flaw stems from these routers terminating encrypted connections. This architectural design grants them complete, unencrypted visibility into every communication flowing through their systems. Blockchain developers utilizing AI-powered coding assistants such as Claude Code for smart contract development or cryptocurrency wallet creation may unknowingly be exposing private keys and seed phrases through these intermediary platforms. The investigation examined 28 commercial routing services alongside 400 free alternatives collected from various online developer communities. Results revealed nine platforms actively inserting malicious instructions, two employing sophisticated evasion techniques, and 17 attempting to capture researcher-controlled Amazon Web Services authentication credentials. In one documented instance, a routing service successfully withdrew Ether from a deliberately vulnerable wallet established by the research team. The financial impact was documented as less than $50. According to the researchers, distinguishing between legitimate credential processing and actual theft proves virtually impossible for end users, given that routing platforms inherently access sensitive information in plaintext during normal operations. The academic paper highlighted a particularly concerning configuration option present in numerous AI agent platforms referred to as “YOLO mode.” When activated, this feature allows AI systems to execute operations autonomously, bypassing individual user authorization prompts. This functionality significantly amplifies the security threat. When a routing platform introduces malicious commands, YOLO mode enables their execution without any opportunity for human intervention or oversight. Researchers also discovered that previously trustworthy routing services can be covertly compromised without operators detecting the change. Free routing platforms particularly may advertise inexpensive API access as an acquisition strategy while simultaneously harvesting credentials. The investigation team recommended that developers implement robust client-side security measures and establish strict protocols prohibiting the transmission of private keys or recovery phrases through any AI agent environment. For a comprehensive solution, researchers proposed that AI service providers implement cryptographic signing of their outputs. This mechanism would enable developers to authenticate that instructions received by their agents genuinely originated from the designated AI model. Co-author Chaofan Shou announced on X that “26 LLM routers are secretly injecting malicious tool calls and stealing creds.” The research team emphasized that LLM API routing platforms occupy a critical security boundary that the artificial intelligence industry currently assumes to be trustworthy by default. The published paper did not include specific details such as blockchain transaction identifiers for the compromised wallet incident.