12 min read

Manus AI Review: The $2B Agent China Pulled Back

Photo by Tianyi Ma / Unsplash

Editorial note: This review covers Manus AI as of April 30, 2026. Pricing, benchmark scores, and the Meta acquisition status will move quickly given the China block three days ago. Verify all dollar figures against the Manus pricing page and the linked tier-1 reporting before acting on anything in this article.

Three days ago, on April 27, 2026, China's National Development and Reform Commission blocked Meta's $2 billion+ acquisition of Manus AI with a regulatory order requiring both parties to withdraw the deal. The agency offered no detailed explanation. The product is still running. The 500,000-person waitlist is still growing. And anyone evaluating Manus for real work has a question that did not exist last week: what does the company even look like in 12 months?

This review covers what Manus actually does, what it costs, where it succeeds, where it fails, and how the blocked Meta deal changes the buying decision. The product is genuinely capable. The corporate situation is not. Both matter.

Digital screens displaying data on a circuit board, evoking the autonomous agent infrastructure underneath Manus AI

TL;DR: Manus is a sandbox-VM agent built by Butterfly Effect (Monica.im) that runs a real browser, terminal, and file system to execute multi-step tasks. It posted state-of-the-art GAIA scores at its March 2025 launch, reportedly cleared a $125 million annualized revenue run rate inside eight months, and drew a $2 billion+ acquisition agreement from Meta in late December 2025. Three days ago, China's NDRC blocked the deal, and roughly 100 staff who had already moved to Meta's Singapore office are stuck in regulatory limbo. The product is good. The ownership question is not.


What Is Manus AI, and Who Built It?

Manus is an autonomous AI agent that runs inside a sandboxed virtual machine with a real browser, real terminal, and real file system. Butterfly Effect, the company behind it, was founded in 2022 by Xiao Hong (CEO), Ji Yichao (chief scientist), and Zhang Tao (product). Manus launched in invite-only beta on March 6, 2025. The demo video crossed one million views in under a day.

The architecture is the load-bearing part. Most "agents" in 2025 were chat plus tool-use plugins. Manus instead spins up a virtual computer per task. The agent opens a browser tab. It fills forms, runs shell commands, edits files, and saves outputs. That's more capable than a plugin system. The model interacts with the same surfaces a human would. It's also harder to debug. The failure mode isn't "the model said the wrong thing" but "the agent clicked the wrong button on a busy page."
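If you want a mental model of that difference, a rough sketch helps. The Python below is not Manus's code, and every name in it (Sandbox, run_task, plan_next_action) is hypothetical; it only shows the shape the paragraph above describes: one fresh sandbox per task, a loop that alternates model decisions with real browser and shell actions, and a hard step budget instead of blind trust.

```python
# Hypothetical sketch of the VM-per-task pattern, not Manus's actual internals.
from dataclasses import dataclass, field

@dataclass
class Sandbox:
    """Stand-in for an isolated VM with a browser, shell, and scratch filesystem."""
    task_id: str
    files: dict = field(default_factory=dict)

    def browse(self, url: str) -> str:
        # A real system would drive a headless browser and return page state.
        return f"<page state for {url}>"

    def run_shell(self, command: str) -> str:
        # A real system would execute this inside the VM, never on the host.
        return f"<output of {command}>"

def run_task(goal: str, plan_next_action) -> dict:
    sandbox = Sandbox(task_id=goal)              # fresh VM per task: no shared state
    observation = "sandbox ready"
    for _ in range(50):                          # hard step budget, not open-ended trust
        action = plan_next_action(goal, observation)   # model picks the next click/command
        if action["type"] == "browse":
            observation = sandbox.browse(action["url"])
        elif action["type"] == "shell":
            observation = sandbox.run_shell(action["command"])
        elif action["type"] == "done":
            return {"goal": goal, "outputs": sandbox.files}
    return {"goal": goal, "outputs": sandbox.files, "note": "step budget exhausted"}
```

The failure mode mentioned above falls out of this loop directly: when `plan_next_action` picks the wrong element on a busy page, the bad observation feeds every later step.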

Close-up of an electronic circuit board, illustrating the virtual-machine compute layer that Manus uses to run autonomous tasks

Funding traces the company's rise. Series B closed April 2025 at roughly $75 million on a $500 million valuation, led by Benchmark Capital. HSG and Tencent backed earlier rounds. Butterfly Effect relocated its headquarters from China to Singapore in mid-2025. About 40 core technical staff moved with it. The rest of the Chinese employees were laid off. By late 2025, the company reportedly cleared a $125 million annualized revenue run rate inside eight months of launch. That's the run-up that brought Meta to the table. (For another opinionated open-source recipe in the multimodal-AI cluster, see our piece on treating image prompts as code.)


How Did Manus Score on the GAIA Benchmark?

At its March 2025 launch, Manus reported 86.5% on GAIA Level 1, against OpenAI Deep Research's 74.3% on the same level. Level 2 came in at 70.1% vs. 69.1%, and Level 3 at 57.7% vs. 47.6%. Those numbers were self-reported in the launch materials and have not been independently re-verified since. Treat them as a directional signal, not a guarantee.

GAIA is a benchmark for general AI assistants across three difficulty tiers, where Level 1 covers basic task completion and Level 3 covers multi-step problems that mix tools, reasoning, and external data. The headline gap on Level 3 (a 10-point lead over OpenAI's reported number) was the part that made the rest of the industry pay attention.

Chart: GAIA benchmark scores at Manus launch (March 2025). Manus: 86.5% (Level 1), 70.1% (Level 2), 57.7% (Level 3). OpenAI Deep Research: 74.3%, 69.1%, 47.6%. Previous SOTA: 67.9%, 67.4%, 42.3%.
Self-reported scores at launch. They are still the most-cited numbers in the SERP, even though no public re-verification has been published since March 2025.

Two caveats matter for any reader making a decision off these numbers. First, GAIA scores are easy to overfit to if you train against the public split. That's why "self-reported" is doing real work in this section. Second, the numbers are now thirteen months old. Frontier models from OpenAI, Anthropic, and Google have all shipped since. The directional message (Manus is competitive) is still likely true. The exact gap isn't. Read every "Manus crushed the benchmark" headline through that lens.


How Reliable Is Manus in Real Use?

Community reports place success on open-ended research at roughly 70-80%, with a sharp drop on tasks that exceed five dependent steps. The most-cited failure modes are hallucinated clicks (about 28% of failures), browser timeouts (22%), and anti-bot blocks (18%). Users also report that roughly 70% of the sites their runs touch sit behind paywalls, CAPTCHAs, or both, which is the single biggest day-to-day friction.

Why does the cliff sit at five steps? Two reasons. The first is context window pressure. The agent accumulates browser screenshots, scratchpad notes, and intermediate file outputs. The model's effective working memory fills up. Earlier instructions decay. The second reason is compounding error. If each step is 90% reliable, five steps in a row is 59% reliable, and ten steps is 35%. The agent isn't "broken" at step six. The math is.
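The compounding-error half of that argument is plain geometric decay. A few lines make the cliff visible, assuming independent steps at a fixed per-step success rate (real runs only approximate that):

```python
# End-to-end reliability of a chain of independent steps at a fixed success rate.
def chain_reliability(per_step: float, steps: int) -> float:
    return per_step ** steps

for steps in (1, 3, 5, 10):
    print(steps, round(chain_reliability(0.90, steps) * 100, 1), "% end-to-end")
# 1  90.0 % end-to-end
# 3  72.9 % end-to-end
# 5  59.0 % end-to-end
# 10 34.9 % end-to-end
```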

Chart: Reported Manus failure modes (community data, early 2026). Hallucinated clicks 28%, browser timeouts 22%, anti-bot/CAPTCHA blocks 18%, other 32%. Source: aggregated community reports, Taskade and Lindy 2026 reviews.
Three patterns explain most run failures. None of them are unique to Manus, but the agent's deeper task chains amplify them more than chat-style assistants.

We ran a small smoke test of three standardized tasks against the Pro Standard tier: a 10-tab competitor scan, a multi-page research synthesis, and a five-step booking flow. The scan and the synthesis both completed successfully. Credit burns came in at roughly 600 and 850. The five-step booking flow failed twice on the same anti-bot challenge. It completed on a third attempt with manual intervention. That matches the failure-mode distribution above almost exactly. It's the part of the product the brochure doesn't tell you.


How Much Does Manus AI Cost in 2026?

Manus has four individual tiers and one team plan. The current pricing on the official Manus pricing page is Free at $0 (1,000 starter credits plus a 300-credit daily refresh), Pro Standard at $20/month (4,000 credits), Pro Customizable at $40/month (8,000 credits), and Pro Extended at $200/month (40,000 credits). The Team plan runs $40 per seat per month with a two-seat minimum and 4,000 credits per seat.

The credit math is the catch. User reports place complex task runs at 500-900 credits each, which means the Pro Standard tier covers roughly 4-8 complex tasks per month before you bump up. The Free tier's 300-credit daily refresh is generous on paper but rarely covers a single complex run. Per-task economics still beat OpenAI Deep Research, where users have reported roughly ten times the per-task cost. (For another credit-based AI tool where unit economics dominated the buying decision, see our Optimeleon AI review.)

Chart: Manus AI pricing tiers and monthly credits (April 2026). Free ($0): ~9,000/month via the 300/day refresh; Pro Standard ($20): 4,000; Pro Customizable ($40): 8,000; Pro Extended ($200): 40,000. Source: manus.im/pricing, April 2026; complex tasks reported at 500-900 credits each.
The Free tier's daily refresh sounds generous, but credit burn on complex runs (500-900 each) means most real users land on Pro Standard or higher.

One more pricing detail worth flagging. The Pro Customizable tier sits awkwardly between Standard and Extended at exactly twice the credit allocation for exactly twice the price, and most user threads recommend either staying on Standard or jumping to Extended when you need the headroom. The middle tier is rarely the right answer.
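If you want the tasks-per-tier arithmetic in one place, the sketch below divides each Pro tier's monthly credits by the community-reported 500-900 credit burn per complex run. The tier figures come from the pricing page as quoted above; the burn range is anecdotal, so treat the outputs as rough capacity estimates rather than guarantees.

```python
# Back-of-the-envelope capacity per Pro tier: monthly credits / reported burn per run.
PRO_TIERS = {
    "Pro Standard ($20)": 4_000,
    "Pro Customizable ($40)": 8_000,
    "Pro Extended ($200)": 40_000,
}
BURN_LOW, BURN_HIGH = 500, 900   # community-reported credits per complex task

for tier, credits in PRO_TIERS.items():
    low, high = credits // BURN_HIGH, credits // BURN_LOW
    print(f"{tier:24} ~{low}-{high} complex runs per month")
# Pro Standard ($20)     ~4-8 complex runs per month
# Pro Customizable ($40) ~8-16 complex runs per month
# Pro Extended ($200)    ~44-80 complex runs per month
```

The same numbers explain the awkward middle tier: Customizable buys exactly double Standard's headroom for exactly double the price, with no change in the per-credit economics.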


What Did Meta's Acquisition and the China Block Actually Change?

Meta announced its $2 billion+ acquisition of Manus on December 29-30, 2025. The company told reporters that "no continuing Chinese ownership interests in Manus AI" would remain post-close. China's NDRC opened a formal probe in January 2026. On April 27, 2026, the agency blocked the deal outright with a one-line regulatory order (CNBC, 2026). Roughly 100 Manus employees had already moved into Meta's Singapore offices before the order. The unwind is, to put it politely, in progress.

Yellow and green network cables neatly connected, representing the cross-border data and personnel flows at the heart of the Meta-Manus deal block

Why does this matter for buyers? Three reasons, in order of how soon they bite. The first is product roadmap risk. A Meta-owned Manus would have eventually merged with Meta's ad and infrastructure stack. That would be valuable for some customers and disqualifying for others. With the deal blocked, that integration is on indefinite hold. The second is execution risk. The founders are reportedly subject to travel restrictions. The engineering team is split across Singapore offices and Meta's payroll. The third is data sovereignty. The NDRC cited technology-leakage and export-control concerns. The agency's posture means future cross-border AI deals will face this filter. Any AI vendor with Chinese-origin technology now carries a tail risk.

Our reading: the buying decision for Manus in 2026 is no longer feature-vs-feature against ChatGPT Agent or Claude. It is "is the company that runs your agent going to exist in 12 months in a form you trust?" That is a different question, and it is the one most existing reviews still skip. (For a self-hosted alternative where the corporate-stability question does not apply, see our review of Agent Zero, the open-source autonomous agent framework.)


Where Does Manus Beat ChatGPT Agent and Claude (and Where Does It Lose)?

Manus's strongest case is open-ended multi-site research at low per-task cost, where the sandbox VM beats plugin-style agents on raw capability and the per-run economics beat OpenAI Deep Research by roughly an order of magnitude. It loses to ChatGPT Agent on enterprise reliability and team features, to Claude (with Projects and computer-use) on long-running coding and persistent memory, and to self-hosted agents on privacy and audit trails.

| Use case | Best pick | Why |
|---|---|---|
| Multi-site research at low budget | Manus | Per-task cost roughly 1/10 of OpenAI Deep Research |
| Long-running coding and refactoring | Claude (Projects + Code) | Persistent project context, stronger code quality |
| Enterprise rollout with SSO and audit | ChatGPT Agent | Mature team plans, predictable governance |
| Privacy-sensitive or on-prem workloads | Self-hosted (Agent Zero) | Local control, full audit trail, no waitlist |
| Persistent agent memory across sessions | Anything but Manus | Manus destroys the VM at session end (no memory) |

The session-amnesia point is a real ceiling. Every Manus run starts from zero, with no recollection of your previous tasks, no learned preferences, and no accumulated context about your business. That is fine for one-off research and impossible for any workflow that depends on the agent remembering "we already tried this last week." A self-hosted memory layer is the workaround. (For one open-source approach, see our piece on self-hosted AI agent memory.)
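In miniature, a self-hosted memory layer just means keeping your own log of past runs outside the agent and prepending it to the next task prompt. Everything below is a hypothetical sketch under that assumption; the file name, field names, and prompt shape are ours, not an API Manus exposes.

```python
# Minimal external memory layer: a local JSONL log of past runs, replayed into
# the next prompt so the agent "remembers" what was already tried last week.
import json
from pathlib import Path

MEMORY_FILE = Path("agent_memory.jsonl")   # hypothetical local store

def remember(task: str, outcome: str, notes: str) -> None:
    with MEMORY_FILE.open("a") as f:
        f.write(json.dumps({"task": task, "outcome": outcome, "notes": notes}) + "\n")

def build_prompt(new_task: str, max_entries: int = 10) -> str:
    history = []
    if MEMORY_FILE.exists():
        history = [json.loads(line) for line in MEMORY_FILE.read_text().splitlines()]
    context = "\n".join(
        f"- {h['task']}: {h['outcome']} ({h['notes']})" for h in history[-max_entries:]
    )
    return f"Previously attempted work:\n{context}\n\nNew task: {new_task}"

remember("Competitor pricing scan", "done", "saved to pricing_q2.csv")
print(build_prompt("Update the pricing scan with the two vendors we missed"))
```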


Should You Sign Up for the Manus Waitlist Today?

Sign up if you do high-volume open-ended research and per-task cost is the dominant variable, or if you want hands-on time with a frontier multi-agent VM architecture. Skip if you need persistent memory, enterprise SSO, on-prem deployment, or guaranteed product continuity through the Meta-China dispute. The waitlist is reportedly past 500,000 people, and the founder-program path on the Manus site moves faster than the public queue.

One practical recommendation: if you sign up, set a 90-day re-evaluation date in your calendar. The Meta deal block is three days old. Whatever the company looks like at that 90-day mark will tell you more than another launch-vintage benchmark. Until then, run real workloads, log real failure modes, and treat the GAIA scores as marketing collateral, not a guarantee. (For a different angle on betting on small specs vs. big platforms, see our breakdown of the illustrated-explainer-spec.)
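"Log real failure modes" can be as simple as a CSV and a tally. The snippet below assumes a log you maintain yourself, with category names borrowed from the community chart earlier in this review; nothing here comes from Manus itself.

```python
# Tally self-recorded failure categories ahead of the 90-day re-evaluation.
from collections import Counter
import csv

def failure_summary(log_path: str) -> Counter:
    """Expects a CSV with columns: task, succeeded (0/1), failure_category."""
    tally = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["succeeded"] == "0":
                tally[row["failure_category"]] += 1
    return tally

# e.g. Counter({'anti_bot': 5, 'hallucinated_click': 3, 'timeout': 2})
```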


Frequently Asked Questions

What is Manus AI?

Manus is an autonomous AI agent built by Butterfly Effect (also known as Monica.im), founded in China in 2022 and headquartered in Singapore from mid-2025. It launched in invitation-only beta on March 6, 2025, runs in a sandboxed virtual machine with a real browser, terminal, and file system, and chains specialist sub-agents per task.

Did Meta acquire Manus?

Meta announced the acquisition in late December 2025 at a value above $2 billion. On April 27, 2026, China's National Development and Reform Commission blocked the deal, ordering both parties to withdraw it. Roughly 100 Manus employees had already moved into Meta's Singapore offices before the order, and the practical unwind is unresolved.

How much does Manus AI cost in 2026?

Manus has four individual tiers: Free ($0, 1,000 starter credits plus 300 daily refresh), Pro Standard ($20/month, 4,000 credits), Pro Customizable ($40/month, 8,000 credits), and Pro Extended ($200/month, 40,000 credits). The Team plan is $40 per seat per month with a two-seat minimum. Complex tasks reportedly burn 500-900 credits per run.

What was Manus's GAIA benchmark score?

At its March 2025 launch, Manus reported 86.5% on GAIA Level 1 vs. OpenAI Deep Research's 74.3%, 70.1% on Level 2 vs. 69.1%, and 57.7% on Level 3 vs. 47.6%. The scores are self-reported and have not been independently re-verified since launch, so treat them as a directional signal rather than a guarantee.

Is Manus AI publicly available?

Not openly. As of April 2026 it remains invitation-only, with a reported waitlist exceeding 500,000 people. Faster paths to access include the Startups program on the Manus website and the founder-program signup. The product is operating normally despite the unresolved Meta deal.


The Bottom Line

Manus is a real product with a real architectural lead in one specific area: autonomous browser-and-terminal task execution at a per-task cost an order of magnitude below OpenAI Deep Research. It's also a real corporate mess. The Meta deal that valued it at $2 billion+ is now blocked. The founders are reportedly restricted from leaving China. The staff is split across Meta's payroll and Singapore offices in a regulatory grey zone. None of that affects whether tomorrow's research run will succeed. All of it affects whether the company ships the integration you need in twelve months.

If the corporate question doesn't bother you, run a free-tier task tonight, watch the credit burn, and decide for yourself. If it does bother you, the open-source self-hosted route lets you skip every part of this drama. Either way, the post-block picture is going to look different from the pre-block picture, and any review (this one included) that doesn't say so is missing the point.