Radar Trends to Watch: February 2026

If you wanted any evidence that AI had colonized just about every aspect of computing, this month’s Trends would be all you need. The Programming section is largely about AI-assisted programming (or whatever you want to call it). AI also claims significant space in Security, Operations, Design, and (of course) Things. AI in the physical world takes many different forms, ranging from desktop robots to automated laboratories. AI’s colonization is nothing new, but visionary tools like Steve Yegge’s Gas Town make it clear how quickly the world is changing.

AI

  • Google has released Genie 3 to subscribers of Google AI Ultra. Genie is a “world model”: a real-time 3D video generator that builds interactive worlds from prompts and lets you walk or fly through those worlds to explore them.
  • Kimi K2.5 is a new open source model from Moonshot AI. It’s natively multimodal and designed to facilitate swarms of up to 100 subagents, starting and orchestrating the subagents on its own.
  • Qwen has released its latest model, Qwen-3-Max-Thinking. Qwen claims performance equivalent to that of other thinking models, including Claude Opus 4.5 and Gemini 3. It includes features like adaptive tool use and test-time scaling.
  • The MCP project has announced that the MCP Apps specification is now an official extension to MCP. The Apps spec defines a standard way for MCP servers to return user interface components, from which clients can build complex user interfaces.
  • Now your agents have their own social network. Meet Moltbook: It’s a social network for OpenClaw (or is it MoltBot?) agents to share their thoughts. Humans are welcome to observe and see what agents have to say to each other.
  • OpenClaw (formerly MoltBot, formerly ClawdBot) gives LLMs persistence and memory in a way that allows any computer to serve as an always-on agent carrying out your instructions. The memory and personal details are stored locally. You can run popular models remotely through their APIs, or locally if you have enough hardware. You communicate with it using any of the popular messaging tools (WhatsApp, Telegram, and so on), so it can be used remotely.
  • FlashWorld is a new video model that can generate 3D scenes from text prompts or 2D images in seconds. There are other models that can generate 3D scenes, but FlashWorld represents a huge advance in speed and efficiency.
  • When creating a knowledge base, use negative examples and decision trees to build AI systems that know when to say “No.” The ability to say “No” is as important as the ability to solve a user’s problem. (A minimal sketch follows this list.)
  • Anthropic has published a “constitution” for Claude’s training. It’s a detailed description of how Claude is intended to behave and the values it reflects. The constitution isn’t just a list of rules; it’s intended to help Claude reason about its behaviors. “Why” is important.
  • OpenAI is experimenting with ads on ChatGPT, along with introducing a new low-cost ads-included subscription (ChatGPT Go, at US$8). They claim that ads will have no effect on ChatGPT answers and that users’ conversations will be kept private from advertisers.
  • OpenAI has also published its OpenResponses API, which standardizes the way clients (including agents) make API requests and receive responses. It’s an important step toward interoperable AI.
  • Anthropic has launched Cowork, a version of Claude Code that has been adapted for general purpose computing. One thing to watch out for: Cowork is vulnerable to an indirect prompt injection attack that allows attackers to steal users’ files.
  • Kaggle has announced community benchmarks, a feature that allows users to create, publish, and share their own benchmarks for AI performance. You can use this service to find benchmarks that are appropriate to your specific application.
  • Prompt engineering isn’t dead yet! Researchers at Google have discovered that, when using a nonreasoning model, simply repeating the prompt yields a significant increase in accuracy. (A quick way to try this is sketched after this list.)
  • Moxie Marlinspike, creator of Signal, is building Confer, an AI assistant that preserves users’ privacy. There’s no data collection, just a conversation between you and the LLM.
  • Google says that “content chunking”—breaking web content into small chunks to make it more likely to be referenced by generative AI—doesn’t work and harms SEO. The company recommends building websites for humans, not for AI.
  • Claude for Healthcare and OpenAI for Healthcare are both HIPAA-compliant products that attempt to smooth the path between practitioners and patients. They’re not concerned with diagnosis as much as they are with workflows for medical professionals.
  • Nightshade is a tool to help artists prevent their work from being used to train AI. Its authors describe it as an offensive tool: Images are distorted in ways that humans can’t perceive but that make the image appear to be something different to an AI, ruining it for training purposes.
  • An analysis of 1,250 interviews about AI use at work shows that artists (creatives) are most conflicted about the use of AI but also the fastest adopters. Scientists are the least conflicted but are adopting AI relatively slowly.
  • Weird generalization? Fine-tuning a model on 19th-century bird names can cause the model to behave as if it’s from the 19th century in unrelated contexts. Narrow fine-tuning can lead to unpredictable generalization, and possibly to data poisoning vulnerabilities.
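
A quick illustration of the “knowing when to say No” item above: the sketch below is purely hypothetical (the examples, thresholds, and routing are invented for illustration). It pairs a few negative examples with a small decision tree that routes a question to an answer, a clarifying question, or an explicit refusal.

```python
# Minimal sketch: negative examples plus a small decision tree for refusals.
# All examples, thresholds, and routing here are hypothetical.
import re

NEGATIVE_EXAMPLES = [
    # Questions the assistant should decline rather than guess at.
    "What will our stock price be next quarter?",
    "Can you approve this refund outside of policy?",
]

KNOWLEDGE_BASE = {
    "reset password": "Use the self-service portal at Settings > Security.",
    "expense report": "Submit receipts through the finance tool by the 5th.",
}

def words(text: str) -> set:
    return set(re.findall(r"[a-z']+", text.lower()))

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score; a real system would use embeddings."""
    wa, wb = words(a), words(b)
    return len(wa & wb) / max(len(wa | wb), 1)

def answer(question: str) -> str:
    # Branch 1: close to a known negative example -> refuse explicitly.
    if any(similarity(question, neg) > 0.5 for neg in NEGATIVE_EXAMPLES):
        return "I can't help with that; please ask a human reviewer."
    # Branch 2: close to something the knowledge base covers -> answer it.
    best = max(KNOWLEDGE_BASE, key=lambda key: similarity(question, key))
    if similarity(question, best) > 0.3:
        return KNOWLEDGE_BASE[best]
    # Branch 3: nothing matches well -> say "I don't know" instead of guessing.
    return "I don't know. Can you rephrase or add more detail?"

print(answer("How do I reset my password?"))
print(answer("What will our stock price be next quarter?"))
```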
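
The prompt-repetition result is just as easy to try. This sketch assumes the OpenAI Python SDK and uses a placeholder model name; the only technique involved is sending the same prompt twice in a single user message.

```python
# Minimal sketch of the "repeat the prompt" trick for nonreasoning models.
# The model name is a placeholder; substitute whatever model you're testing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_repetition(prompt: str, model: str = "gpt-4o-mini") -> str:
    # The entire trick: state the prompt, then state it again verbatim.
    doubled = f"{prompt}\n\nI'll repeat the question:\n\n{prompt}"
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": doubled}],
    )
    return response.choices[0].message.content

print(ask_with_repetition(
    "A bat and a ball cost $1.10 in total. "
    "The bat costs $1.00 more than the ball. "
    "How much does the ball cost?"
))
```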

Programming

  • In an experiment with autonomous coding, a group at Cursor used hundreds of agents working simultaneously to build a web browser in one week.
  • AI-assisted programming is about relocating rigor and discipline rather than abandoning them. Excellent points by Chad Fowler.
  • The AI Usage Policy for ghostty is worth reading. While strict, it points out that the use of AI is welcome. The project has a problem with unqualified humans using AI—in other words, with “the people, not the tools.”
  • In the age of AI, what’s a software engineer’s most important skill? Communication—coupled with other so-called “soft skills.”
  • You can practice your command line basics with the Unix Pipe Card Game. It’s also a great teaching tool. Command line mastery is becoming rare.
  • The cURL project is eliminating bug bounties in an attempt to minimize AI slop and bad bug reports.
  • NanoLang is a new programming language that’s designed for LLMs to generate. It has “mandatory testing and unambiguous syntax.” Simon Willison notes that it combines elements of C, Lisp, and Rust.
  • Is bash all an agent needs? While tools designed for agents proliferate, there’s a good argument that basic Unix tools are all agents need to solve most problems. You don’t need to reinvent grep. You need to let agents perform complex tasks using simple components. (A sketch follows this list.)
  • Gleam is a new programming language that runs on the Erlang virtual machine (BEAM). Like Erlang, it’s designed for massive concurrency.
  • The speed at which you write or generate code is much less important than the bottlenecks in the process between software development and the customer.
  • Simon Willison’s post about the ethics of using AI to port open source software to different languages is a must-read.
  • Language models appear to prefer Python when they generate source code. But is that the best option? What would it mean to have a “seed bank” for code so that AIs can be trained on code that’s known to be trustworthy?
  • Is it a time for building walls? Are open APIs a thing of the past? Tomasz Tunguz sees an increasing number of restrictions and limitations on formerly open APIs.
  • A software library without code? Drew Breunig experiments with whenwords, a library that is just a specification. The specification can then be converted into a working library in any common programming language by any LLM. (A toy illustration follows this list.)
  • Steve Yegge’s Gas Town deserves more than a look. It’s a multi-agent orchestration framework that goes far beyond anything I’ve seen. Is this the future of programming? A “good piece of speculative design fiction that asks provocative questions” about agentic coding? We’ll find out in the coming year.
  • Pyodide and Wasm let you run Python in the browser. A minimal example is sketched after this list.
  • Gergely Orosz argues that code review tools don’t make sense for AI-generated code. It’s important to know the prompt and what code was edited by a human.
  • Kent Beck argues that AI makes junior developers more useful, not expendable. It prevents them from spending time on solutions that don’t work out, helping them learn faster. Kent calls this “augmented coding” and contrasts it with “vibe coding,” where AI’s output is uncritically accepted.
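
To make the “bash is all an agent needs” argument concrete, here’s a hypothetical, deliberately minimal tool definition: rather than a catalog of bespoke tools, the agent gets one tool that runs a shell command and returns its output, leaving composition to the standard Unix utilities. The schema and names are illustrative, not any vendor’s official API.

```python
# Minimal sketch: a single "run a shell command" tool instead of many bespoke ones.
# The tool schema follows the common JSON-schema style used by several LLM APIs;
# the names and limits are illustrative.
import subprocess

SHELL_TOOL = {
    "name": "run_shell",
    "description": "Run a Unix shell command and return stdout, stderr, and exit code.",
    "parameters": {
        "type": "object",
        "properties": {
            "command": {
                "type": "string",
                "description": "Command to execute, e.g. 'grep -rn TODO src/'",
            },
        },
        "required": ["command"],
    },
}

def run_shell(command: str, timeout: int = 30) -> dict:
    """Execute the command the agent asked for; the agent composes pipelines itself."""
    result = subprocess.run(
        command, shell=True, capture_output=True, text=True, timeout=timeout
    )
    return {
        "stdout": result.stdout[-10_000:],   # truncate long output for the context window
        "stderr": result.stderr[-10_000:],
        "exit_code": result.returncode,
    }

# The agent can now do most of what special-purpose tools do with plain Unix:
print(run_shell("grep -rln 'def main' . | head -5"))
```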
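
The spec-only library idea is also easy to illustrate. The miniature “spec” below is invented for this example (it is not the actual whenwords specification); the function under it is the kind of disposable implementation an LLM could regenerate from the spec in any language you need.

```python
# A toy "library as specification": the spec is the published artifact;
# the code below is one disposable rendering of it. (Illustrative only --
# this is not the real whenwords spec.)
SPEC = """
relative_time(seconds_ago) -> str
  under 45 s        -> "just now"
  45 s to 44 min    -> "N minutes ago"  (rounded)
  45 to 89 min      -> "an hour ago"
  90 min to 21 h    -> "N hours ago"    (rounded)
  otherwise         -> "N days ago"     (rounded)
"""

def relative_time(seconds_ago: float) -> str:
    minutes = seconds_ago / 60
    hours = minutes / 60
    if seconds_ago < 45:
        return "just now"
    if minutes < 45:
        return f"{round(minutes)} minutes ago"
    if minutes < 90:
        return "an hour ago"
    if hours < 22:
        return f"{round(hours)} hours ago"
    return f"{round(hours / 24)} days ago"

print(relative_time(30))      # just now
print(relative_time(600))     # 10 minutes ago
print(relative_time(7200))    # 2 hours ago
```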
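
And here’s a minimal sketch of what Pyodide’s “Python in the browser” looks like from the Python side. It assumes the page has already loaded the Pyodide runtime and passed this code to the interpreter; the js module is Pyodide’s bridge to the browser’s globals.

```python
# Python running in the browser under Pyodide. This snippet assumes it is
# executed by pyodide.runPython() / runPythonAsync() on a page that has
# already loaded the Pyodide runtime; it will not run in a plain CPython shell.
from js import document  # Pyodide exposes the browser's JS globals as a module

# Build a DOM node from Python, exactly as you would from JavaScript.
heading = document.createElement("h2")
heading.textContent = "Hello from Python compiled to Wasm"
document.body.appendChild(heading)

# Ordinary Python still works too; the standard library ships with Pyodide.
import statistics
values = [3, 1, 4, 1, 5, 9, 2, 6]
heading.title = f"median of {values} is {statistics.median(values)}"
```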

Security and Privacy

  • Researchers have discovered a new attack against ChatGPT that can exfiltrate users’ private information without leaving any signs of its activity on the victim’s machines. This attack is yet another variant of prompt injection. Other models are probably vulnerable to similar attacks.
  • Sandboxes for AI: Can you ensure that AI-generated code won’t misbehave? Building an effective sandbox limits the damage it can do. (A minimal sketch follows this list.)
  • AI Mode on Google Search can now access your photos and email to give you more personalized results. According to Google, Personal Intelligence is strictly opt-in; photos and email won’t be used for training models, though prompts and responses will.
  • Fine-tuning an AI can have unexpected consequences. An AI that’s trained to generate bad code will also generate misleading, incorrect, or deceptive responses on other tasks. More generally, training an AI to misbehave on one task will cause it to misbehave on others.
  • California’s new privacy protection law, DROP, is now in effect. Under this law, California residents who want data deleted make a request to a single government agency, which then relays the request to all data brokers.
  • Is SSL dangerous? It’s a technology that you only build experience with when something goes wrong; when something goes wrong, the blast radius is 100%; and automation both minimizes human touch and makes certain kinds of errors more likely.
  • Here’s an explanation of the MongoBleed attack that had almost all MongoDB users rushing to update their software.
  • Anyone interested in security should be aware of the top trends in phishing.
  • Google is shutting down its dark web report, a tool that notified users if their data was circulating on the “dark web.” While this sounds (and may be) drastic, the stated reason is that there’s little that a user can do about data on the dark web.
  • Microsoft is finally retiring RC4, a stream cipher from the 1980s with a known vulnerability that was discovered after the algorithm was leaked. RC4 was widely used in its day (including in web staples like SSL and TLS) but was largely abandoned a decade ago.
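
For the sandboxing item above, here’s a minimal sketch of the idea: run untrusted (for instance, AI-generated) Python in a separate process with CPU and memory caps and an empty environment. A production sandbox would use containers, gVisor, or a VM; everything here (limits, flags, file handling) is illustrative, and the resource limits are Unix-specific.

```python
# Minimal sketch of sandboxing untrusted (e.g., AI-generated) Python: run it in
# a separate process with CPU/memory caps and an empty environment.
# A real sandbox would use containers, gVisor, or a VM; this only shows the shape.
import resource
import subprocess
import sys
import tempfile

def limit_resources():
    # Called in the child before exec (Unix only): cap CPU seconds and address space.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))                     # 2 CPU-seconds
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20, 256 * 2**20))  # 256 MB

def run_untrusted(code: str) -> subprocess.CompletedProcess:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    return subprocess.run(
        [sys.executable, "-I", path],   # -I: isolated mode, ignores env vars and user site
        capture_output=True,
        text=True,
        timeout=5,
        preexec_fn=limit_resources,     # not available on Windows
        env={},                         # start from an empty environment
    )

result = run_untrusted("print(sum(range(10)))")
print(result.stdout, result.returncode)
```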

Operations

  • AI is stress-testing business models. Value is moving up the stack—to operations. One thing you can’t prompt an AI to do is guarantee four or five nines uptime.
  • How to make your DNS more resilient and avoid outages: Some excellent ideas from Adrian Cockcroft.
  • Kubernetes 1.35 (aka “Timbernetes”) supports vertical scaling: adjusting CPU and memory dynamically, without restarting Pods.

Things

  • Google was the first to build (and fail with) smart glasses targeting consumers. They’re trying again. Will they succeed this time? Meta’s Ray-Ban-based product has had some success. Is it time for XR yet?
  • NVIDIA has announced the Vera Rubin series of GPU chips. It claims the new series is five times more efficient than its previous chips.
  • An AI-driven vending machine was installed at the Wall Street Journal offices. Reporters soon tricked it into giving away all of its stock and got it to order things like a PlayStation 5 and a live fish. (It can order new stock.)
  • DeepMind is building an automated material science laboratory. While the research will be directed by humans, the lab is deeply integrated with Google’s Gemini model and will use robots to synthesize and test new materials.

Design

  • Despite the almost constant discussion of AI, design for AI is being left out. “Design is the discipline of learning from humans, understanding what they actually need rather than what they say they want.”
  • What does a design project deliver? Luke Wroblewski argues that, with AI, a design project isn’t just about delivering a “design”; it can also include delivering AI tools that allow the client to generate their own design assets.
  • Good design is about understanding the people on both sides of the product: users and developers alike. Designers need to understand why users get frustrated too.
