
Claude's OpenClaw Killer

NVIDIA Launches NemoClaw

In partnership with

Turn AI Into Extra Income

You don’t need to be a coder to make AI work for you. Subscribe to Mindstream and get 200+ proven ideas showing how real people are using ChatGPT, Midjourney, and other tools to earn on the side.

From small wins to full-on ventures, this guide helps you turn AI skills into real results, without the overwhelm.

Hello WKND AI Warriors!

Anthropic launched Claude Code Channels, giving developers scoped AI workspaces instead of one agent roaming across everything.

Also, Google is expanding Search to pull answers from your Gmail, Docs, Calendar, and Photos for more personalized results.

Plus, Nvidia unveiled NemoClaw, an open platform to standardize how companies build and run AI agents.

Oh yeah, and a new study of 81,000 AI-assisted interviews shows workers are becoming “AI editors” as tools like Claude reshape daily work.

Today’s newsletter includes:

  • 📰 AI NEWS RECAP

  • 🤿 AI DEEP DIVE

📰 AI NEWS RECAP

Claude's OpenClaw Killer

Claude Code just got a new feature.

It looks small.
It changes how people will actually use AI.

Anthropic launched something called Code Channels.

You can now message your AI agent through Telegram or Discord.
It runs in the background.
It does the work.
It replies when it is done.
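The flow described here — drop a task into a channel, a background agent does the work, a reply arrives when it's done — is just a producer/worker pattern. Here's a toy sketch of that pattern in Python; the names and the fake "work" are illustrative, not Anthropic's actual Code Channels API:

```python
import queue
import threading
import time

# Illustrative sketch only -- NOT the real Code Channels API.
# It mimics the flow: send a task, a background "agent" works on it,
# and a reply shows up when the work is done.

tasks: queue.Queue = queue.Queue()
replies: queue.Queue = queue.Queue()

def agent_worker() -> None:
    """Background worker: pull a task, do the work, post the reply."""
    while True:
        task = tasks.get()
        time.sleep(0.05)          # stand-in for the actual work
        replies.put(f"done: {task}")
        tasks.task_done()

threading.Thread(target=agent_worker, daemon=True).start()

# "Send a task and walk away" -- no step-by-step guidance needed.
tasks.put("refactor the auth module")
tasks.put("summarize open issues")
tasks.join()                      # a chat client would poll for replies instead

results = []
while not replies.empty():
    results.append(replies.get())
print(results)
```

In the real product the queue is a Telegram or Discord channel, but the shape is the same: the sender never blocks on each step of the work.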

That sounds simple.

It is not.

Until now, most AI tools worked like this:

You sit down.
You ask.
You wait.
You guide it step by step.

Now the model shifts.

You send a task.
You walk away.
It works without you.

What used to feel like a tool
starts to feel like a worker.

This is why people are calling it an “OpenClaw killer.”

OpenClaw showed what people actually wanted:

• a persistent agent
• reachable from your phone
• able to act, not just respond
• running even when you are not there

Anthropic did something predictable.

They watched the behavior.
Then built the cleaner version.

Same idea.
Less friction.
More control.
More security.

That pattern repeats everywhere in technology.

Open systems explore.
Closed systems package.

Less “use AI when you sit down.”
More “assign work and come back later.”

And once that becomes normal,
people will start asking a different question.

Not “What can AI help me with?”

But

“What should I hand off entirely?”

Anthropic published results from 81,000 interviews it ran with professionals using its Interviewer tool, showing AI is already embedded in day‑to‑day knowledge work. People reported using Claude heavily for drafting, research, and code, but also described new pain points like over‑reliance, hallucinations, and having to become “AI editors” rather than pure creators.

Google is expanding Personal Intelligence in Search so more users can opt in to answers that draw from their Gmail, Docs, Calendar, and Photos. That means you can ask things like “What do I need to prepare for my trip?” or “When did I last meet with this client?” and get responses that mix web results with your private Google data in one place.

Manus announced “My Computer,” a desktop‑like experience where its agent can see windows, files, and apps and take actions for you across the whole machine. Instead of pasting in snippets, you give it goals (“clean up these folders,” “update this deck”) and watch it click, type, and edit inside your actual OS environment.

Mistral introduced Forge, a hosted platform for running and orchestrating its models with built‑in tools for evaluation, monitoring, and fine‑tuning. It’s aimed at companies that want to build production apps on Mistral without stitching together their own infrastructure from raw APIs and open‑source components.

Nvidia officially announced NemoClaw, an open agent platform that sits on top of its NeMo stack and is meant to standardize how enterprises build and run AI agents. The idea is to give developers a common toolkit for planning, tool‑calling, and multi‑agent coordination, while of course nudging those workloads toward Nvidia hardware.

Chinese startup MiniMax released its new M2.7 model, a proprietary system billed as “self‑evolving” because it can run automatic evaluation and self‑improvement loops on its own behaviors. Early benchmarks suggest it performs 30–50 percent better than the company’s previous models on coding, reasoning, and multimodal tasks, as China’s domestic stack keeps racing to close the gap with Western frontier models.

🤿 AI DEEP DIVE

Can’t keep up with Claude?

Catch up here!

How'd you like this newsletter?

Love it or hate it? Let us know why!


How can you help?

Refer my newsletter to help others learn AI.

Missed last week’s edition?