AI Digest.

Block Fires 4,000 and Stock Surges 22% While Karpathy Declares the End of Traditional Programming

Jack Dorsey cut 40% of Block's workforce in the largest AI-driven layoff yet, and Wall Street rewarded it with a $6 billion market cap jump. Anthropic publicly refused Pentagon demands for mass surveillance and autonomous weapons integration. Claude Code shipped auto-memory while OpenAI and Google pushed new product capabilities.

Daily Wrap-Up

Two stories dominated the timeline today, and they sit in uncomfortable tension with each other. Jack Dorsey laid off 4,000 people at Block, roughly 40% of the company, and the stock immediately surged 22%. Every CEO in America watched that happen, and the math is now inescapable: fewer humans plus AI tools equals higher margins, and the market will reward you for acting on it. The posts analyzing this ranged from detailed financial breakdowns to gallows humor, but the consensus was clear. This is not an outlier. It is a blueprint. The fact that Block was profitable and growing when it made the cuts is what makes this moment different from previous tech layoffs.

Meanwhile, Anthropic drew a line in the sand against the Pentagon. Dario Amodei publicly refused demands to enable Claude for mass surveillance and autonomous weapons, stating the company "cannot in good conscience accede to their request." On the product side, Claude Code shipped auto-memory, a feature that lets Claude remember project context, debugging patterns, and preferred approaches across sessions. The juxtaposition is striking: an AI company voluntarily limiting its own power while simultaneously shipping features that make individual developers dramatically more capable.

The rest of the day was a blur of product launches. OpenAI showed off a restaurant voice agent built on gpt-realtime-1.5 and a Codex-to-Figma design workflow. Google dropped Gemini 3.1 Flash Image with faster generation at lower cost. Perplexity apparently one-shotted a Bloomberg Terminal replica. The pace of capability expansion is genuinely hard to track, which is exactly what @cgtwts was getting at when they begged Anthropic to take a day off so everyone could catch up. The most practical takeaway for developers: Claude Code's auto-memory feature is live now and directly applicable to your daily workflow. If you are not using persistent context across coding sessions, you are manually re-explaining things that your tools can now remember for you. Set it up today.

Quick Hits

  • @JesseCohenInv posted a speculative 2036 scenario where 80% of jobs have been replaced by AI and robotics. Felt less speculative after the Block news.
  • @gdb shared a podcast covering "some intense moments at OpenAI" with no further context. Classic.
  • @gdb also dropped a one-liner: "always run with xhigh reasoning." Filing that under cryptic advice from OpenAI co-founders.
  • @thekitze celebrated @tinkererclub hitting $333,333 in revenue in its first month, including sponsors. A third of a million in 30 days for a community product is no joke.
  • @mattpocockuk argued that AI performs worse on bad codebases (garbage in, garbage out) and pointed to "deep modules," a 20-year-old software design concept, as the solution. Good reminder that code architecture matters more, not less, when AI is writing chunks of it.
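
The deep-module idea in that last item can be sketched in a few lines. This is an illustrative example with hypothetical names, not from the original thread: a deep module hides real complexity (versioning, caching) behind a small interface, so both humans and AI tools can use it without loading its internals into context.

```python
# A "deep" module: tiny interface, substantial functionality behind it.
# All names here are illustrative, not from the original thread.

class DocumentStore:
    """Callers see two methods; version history and cache
    maintenance stay hidden inside the module."""

    def __init__(self):
        self._docs = {}    # id -> list of versions
        self._cache = {}   # id -> latest text

    def save(self, doc_id, text):
        """Store a new version and return its version number."""
        versions = self._docs.setdefault(doc_id, [])
        versions.append(text)
        self._cache[doc_id] = text  # keep the cache consistent
        return len(versions)

    def load(self, doc_id, version=None):
        """Fetch the latest version by default, or a specific one."""
        if version is None:
            return self._cache[doc_id]
        return self._docs[doc_id][version - 1]


store = DocumentStore()
store.save("readme", "v1 text")
store.save("readme", "v2 text")
print(store.load("readme"))     # latest version
print(store.load("readme", 1))  # first version
```

The interface stays two methods wide no matter how much machinery accumulates behind it, which is exactly the property that makes a codebase tractable for an AI assistant.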

Block Fires 4,000: The First Major AI Layoff Blueprint

The single biggest story today was Jack Dorsey cutting Block's workforce from 10,000 to under 6,000 in one move. This was not a struggling company trimming fat. Block's 2026 profit guidance is up 54%, gross profit is growing 18%, and earnings per share projections crushed analyst expectations. Dorsey chose to do this from a position of strength, and he said the quiet part out loud: "Intelligence tools paired with smaller teams have already changed what it means to run a company."

@aakashgupta laid out the brutal arithmetic: "The market added roughly $6 billion in market cap. That's ~$1.5 million in enterprise value created per eliminated role." He went further, contextualizing it against a wave of similar moves: "ASML cut 1,700 jobs last month while reporting record orders. Salesforce cut 5,000 after AI agents started handling 50% of customer interactions. Amazon cut 16,000 in January on top of 14,000 in October. Every one of these companies was growing when they did it."

The internal mechanics tell an important story for developers. Block's AI platform, called "Goose," started as a small engineering test tool two years ago. Now nearly every employee uses it. As @_Investinq detailed, "Engineers are shipping 40% more code per person than they were six months ago. That's the productivity gain that made 4,000 people expendable." AI fluency was built into performance reviews. If you could not keep up, you were next.

@krystalball captured the second-order effect concisely: "Block just cut 40% of their workforce because of AI and were rewarded with a massive stock surge. Other companies are going to want to recreate this." And @GodsBurnt provided the dark comedy version, tracing the whiplash timeline: companies told workers to go remote in 2020, demanded they return in 2024, then replaced them with AI in 2026. @shiri_shh put it plainly: "Jack Dorsey just laid off 4000 people in a single tweet. AI taking jobs is not a meme anymore."

The signal here is not that AI can replace jobs. Everyone knew that. The signal is that the market will actively reward companies for doing it aggressively and all at once. Dorsey explicitly chose one massive cut over gradual reductions because, in his words, gradual cuts destroy morale and trust. The restructuring charges pay for themselves in two quarters. After that, pure margin expansion. Every board in America is running this calculation tonight.

Anthropic Draws a Line: No Weapons, No Surveillance

In a move that stands in sharp contrast to the "optimize headcount at all costs" mood, Anthropic publicly refused the Pentagon's demands to enable Claude for mass surveillance and autonomous weapons. @AnthropicAI posted a link to a formal statement from CEO Dario Amodei on "discussions with the Department of War."

@cryptopunk7213 broke down the key points from Amodei's statement: "These threats do not change our position: we cannot in good conscience accede to their request." Amodei described the Pentagon's efforts to force Anthropic to enable Claude for mass surveillance and autonomous killing weapons. His response was direct: mass surveillance is not democratic, Claude is not reliable enough for autonomous weapons, and Anthropic would help the government transition to a new provider if they chose to blacklist the company. As @cryptopunk7213 put it, "fair play for sticking by their code of honor."

This is a significant moment for the AI industry. A company valued at tens of billions voluntarily walked away from what would presumably be an enormous government contract, citing both ethical principles and technical limitations. The willingness to acknowledge that their own model "isn't good enough" for certain applications is notable intellectual honesty in an industry that tends toward capability hype. Whether this position holds under sustained government pressure remains to be seen, but the public statement makes it harder to quietly reverse course later.

Claude Code Ships Auto-Memory

On the product side, Anthropic had a busy day. Claude Code 2.1.59 landed with auto-memory as the headline feature. @trq212 explained the concept: "Claude now remembers what it learns across sessions, your project context, debugging patterns, preferred approaches, and recalls it later without you having to write anything down."
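
The concept itself is simple to picture. Here is a minimal, hypothetical sketch of session-persistent project memory backed by a JSON file; it illustrates the idea only and is not Anthropic's actual mechanism.

```python
import json
from pathlib import Path

# Minimal sketch of session-persistent project memory: notes survive
# process restarts by living in a JSON file. An illustration of the
# concept, NOT Anthropic's implementation.

class ProjectMemory:
    def __init__(self, path="memory.json"):
        self.path = Path(path)
        self.notes = (
            json.loads(self.path.read_text()) if self.path.exists() else {}
        )

    def remember(self, topic, note):
        self.notes.setdefault(topic, []).append(note)
        self.path.write_text(json.dumps(self.notes, indent=2))

    def recall(self, topic):
        return self.notes.get(topic, [])

# Session 1: record something learned while debugging
mem = ProjectMemory("/tmp/demo_memory.json")
mem.remember("testing", "run pytest with -x to stop on first failure")

# Session 2 (a fresh process): the note is still there
mem2 = ProjectMemory("/tmp/demo_memory.json")
print(mem2.recall("testing"))
```

The point of the feature is that this bookkeeping happens automatically, without the developer writing notes down at all.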

@omarsar0 was brief but emphatic: "Claude Code now supports auto-memory. This is huge!" And @cgtwts captured the developer fatigue that comes with Anthropic's pace: "Someone please tell Anthropic to take a day off so the rest of us can catch up. At this point I'm still processing the previous update."

@oikon48 posted the full release notes in Japanese, covering additional improvements: better "always allow" prefix suggestions for compound bash commands, improved task list ordering, reduced memory usage in multi-agent sessions, and fixes for MCP OAuth token refresh race conditions. The compound command improvement is a quality-of-life fix that addresses a real friction point. When you run chained commands like cd /tmp && git fetch && git push, Claude Code now evaluates sub-commands individually for permission rather than treating the whole chain as one opaque block. Small change, big difference in daily workflow.
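
The per-sub-command behavior described above can be approximated in a few lines. This is an illustrative sketch of allowlisting each link in a chained command, not Claude Code's actual implementation:

```python
import shlex

# Illustrative allowlist of command prefixes the user has approved.
# NOT Claude Code's implementation; a sketch of the idea only.
ALLOWED_PREFIXES = {("git", "fetch"), ("git", "status"), ("cd",)}

def split_chain(command):
    """Split 'a && b && c' into its sub-commands."""
    return [part.strip() for part in command.split("&&")]

def needs_approval(command):
    """Return the sub-commands not covered by the allowlist."""
    pending = []
    for sub in split_chain(command):
        tokens = tuple(shlex.split(sub))
        # allowed if any allowlisted prefix matches the command's start
        if not any(tokens[:len(p)] == p for p in ALLOWED_PREFIXES):
            pending.append(sub)
    return pending

print(needs_approval("cd /tmp && git fetch && git push"))
# only "git push" falls outside the allowlist here
```

Evaluating each link separately means one unapproved command no longer forces a permission prompt for the whole chain.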

AI Products: Voice Agents, Design Workflows, and Terminal Killers

The product announcements kept coming from other players. @OpenAIDevs showed two distinct capabilities: a restaurant voice agent built on gpt-realtime-1.5, and a code-to-design-to-code workflow integrating Codex with Figma. The Figma integration is particularly interesting for frontend developers. The pitch is generating design files from code, collaborating in Figma, then implementing updates back in Codex without breaking flow. If it works as advertised, it closes a gap that has frustrated design-to-development handoffs for years.

@googleaidevs announced Nano Banana 2, which is apparently the internal name for Gemini 3.1 Flash Image. Google described it as their state-of-the-art model for image generation, offering faster speeds and lower costs with improved capabilities. The naming is delightful. The capability race in image generation continues to compress what used to require specialized tools into API calls.

Perhaps the most provocative product claim came from @zivdotcat: "Bloomberg makes ~$15B a year, ~$12B from the terminal. Bloomberg charges $30,000/yr per user for terminal access. Perplexity Computer literally one-shotted the terminal with real-time data within minutes using a single prompt." Whether "one-shotted" here means "replicated the full functionality" or "made a demo that looks similar" matters enormously, but the directional threat to entrenched information monopolies is real. Bloomberg's moat has always been data access plus specialized UI plus network effects. AI tools are chipping away at at least two of those three.

The Age of Personalized Software

@EsotericCofe posted two related updates showcasing a genuinely novel use case: using OpenClaw to generate a daily personalized news brief delivered by an AI-cloned Angela Merkel "posing as a news anchor with a heavy German accent no one understands." The technical stack is creative: OpenClaw fetches current news, then calls a Krea AI node app that uses Qwen voice clone plus Fabric to generate the video.

The implementation is absurd and funny, but the underlying point is serious. @EsotericCofe declared "the age of PERSONALIZED SOFTWARE is HERE," and they are not wrong. The barrier to creating custom media experiences has collapsed from "hire a production team" to "chain three API calls together." The fact that someone built a personalized AI news anchor as a weekend project says something about where consumer software is heading. The professional media industry should be paying attention to this, not because AI Merkel is competition, but because the tooling to create personalized content experiences is now accessible to anyone with an API key and a creative idea.
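
The "chain three API calls together" pattern is worth making concrete. In the sketch below every function body is a stub standing in for a real service (a news API, an LLM, a voice-clone/video generator); none of these are the actual OpenClaw, Krea, or Qwen calls from the original post.

```python
# Sketch of the pipeline pattern behind personalized media. All
# functions are hypothetical stubs, not the real service calls.

def fetch_headlines():
    # stand-in for a news API call
    return ["Block cuts 4,000 jobs", "Anthropic refuses Pentagon demands"]

def write_anchor_script(headlines, persona="news anchor"):
    # stand-in for an LLM call that turns headlines into a script
    intro = f"Good morning, this is your {persona}."
    return intro + " " + " ".join(f"Today: {h}." for h in headlines)

def render_video(script, voice="cloned-voice-id"):
    # stand-in for a voice-clone plus video-generation call
    return {"voice": voice, "duration_s": len(script.split()) * 0.4}

script = write_anchor_script(fetch_headlines())
video = render_video(script)
print(script)
print(video)
```

Swap each stub for a real API and schedule the script to run every morning, and you have the weekend project described above.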

Sources

Perplexity @perplexity_ai ·
Introducing Perplexity Computer. Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end-to-end. https://t.co/dZUybl6VkY
James Bedford @jameesy ·
How I Structure Obsidian & Claude (Full Walkthrough)
Guri Singh @heygurisingh ·
Someone just built an AI system that runs 60+ AI agents simultaneously and they all learn from each other. It's called Claude-Flow and it's ranked #1 in agent-based frameworks on GitHub. One agent plans. Another codes. Another tests. Another reviews security. All running in parallel. All sharing memory. All getting smarter every run. The wildest part? It cuts Claude API costs by 75% using smart routing, simple tasks go to a free WebAssembly layer, complex ones to the right model. Your Claude subscription just became 2.5x more powerful. 14,100+ developers already starred it. 100% Opensource.
Tiago Forte @fortelabs ·
Wait, so the founder of Anthropic is "Amodei," as in "loves god"? And he leads Anthropic, meaning "human-centered," which is being used in military strikes? And the creator of ChatGPT is "Altman," as in "an alternative to humans"? And he leads OpenAI, which is completely closed? And then there's "Gemini," meaning "two-faced," from a company that promised to do no evil? And the whole global AI arms race is being driven by people who claimed to be worried about AGI taking over the world? Either the universe is an extremely cliché writer, or has a brilliant sense of humor
Claude @claudeai ·
It gets better with plugins, which gives Cowork domain expertise across design, engineering, operations, and more: https://t.co/2igJVv767T Also, we’re adding a new Customize tab in your Cowork sidebar. One place to manage your plugins, skills, and connectors.
Alex Finn @AlexFinn ·
Do you even understand what this means? An open source model just released that is: • Just as smart as Sonnet 4.5 • Incredible at coding • Can run on almost any modern computer If you have 32gb of RAM (most Mac Minis do) you can have unlimited super intelligence on your desk. For free. Sonnet 4.5 was released 5 months ago In 5 months that level of intelligence went from frontier to free on your desk And not only that, can run on any laptop with 32gb of RAM If you have the memory, do the following immediately: 1. Download LM Studio 2. Go to your OpenClaw and ask which of these new Qwen models is best for your hardware 3. Have it walk you through downloading and loading it 4. Build apps with it knowing you are using your own personal, private super intelligence on your desk The people denying this is the future are so beyond lost.
Qwen @Alibaba_Qwen ·
🚀 Introducing the Qwen 3.5 Medium Model Series Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B ✨ More intelligence, less compute. • Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B — a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts. • Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models — especially in more complex agent scenarios. • Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring: – 1M context length by default – Official built-in tools 🔗 Hugging Face: https://t.co/wFMdX5pDjU 🔗 ModelScope: https://t.co/9NGXcIdCWI 🔗 Qwen3.5-Flash API: https://t.co/82ESSpaqAF Try in Qwen Chat 👇 Flash: https://t.co/UkTL3JZxIK 27B: https://t.co/haKxG4lETy 35B-A3B: https://t.co/Oc1lYSTbwh 122B-A10B: https://t.co/hBMODXmh1o Would love to hear what you build with it.

ₕₐₘₚₜₒₙ @hamptonism ·
Perplexity just became the first AI company to truly go head-to-head with the Bloomberg Terminal... Using Perplexity Computer (with no local setup or single LLM limitation), it was able to build me a terminal with real-time data to analyze $NVDA using Perplexity Finance: https://t.co/S3l5F5MRiv

Qwen @Alibaba_Qwen ·
The Qwen3.5 series maintains near-lossless accuracy under 4-bit weight and KV cache quantization. In terms of long-context efficiency: Qwen3.5-27B supports 800K+ context length Qwen3.5-35B-A3B exceeds 1M context on consumer-grade GPUs with 32GB VRAM Qwen3.5-122B-A10B supports 1M+ context length on server-grade GPUs with 80GB VRAM In addition, we have open-sourced the Qwen3.5-35B-A3B-Base model to better support research and innovation. We can't wait to see what the community builds next!

CG @cgtwts ·
Someone please tell Anthropic to take a day off so the rest of us can catch up at this point i’m still processing the previous update. https://t.co/ZwdeCkAemM
Thariq @trq212 ·
We've rolled out a new auto-memory feature. Claude now remembers what it learns across sessions — your project context, debugging patterns, preferred approaches — and recalls it later without you having to write anything down. https://t.co/c7PyGaukNQ

ℏεsam @Hesamation ·
this Obsidian + AI is the new hot combo. few people know that the CEO of Obsidian @kepano has made multiples skills for Claude Code and Codex that you can use right now both for your codebase and your personal vault. https://t.co/pshaSsfcj6

Nucleus☕️ @EsotericCofe ·
how this works: openclaw fetches current news and then calls a @krea_ai node app i created that uses qwen voice clone + fabric to create the video https://t.co/qBg4yXhztk
Jesse Cohen @JesseCohenInv ·
It's 2036 and 80% of jobs have been replaced by AI and robotics. https://t.co/IoziutBePJ
Claude @claudeai ·
New in Cowork: scheduled tasks. Claude can now complete recurring tasks at specific times automatically: a morning brief, weekly spreadsheet updates, Friday team presentations. https://t.co/7ucKZbAVip
Nucleus☕️ @EsotericCofe ·
now: openclaw gives me a daily personalized news brief through angela merkel posing as a news anchor with a heavy german accent no one understands the age of PERSONALIZED SOFTWARE is HERE https://t.co/X6th3CS4N0
Matt Pocock @mattpocockuk ·
If you throw AI at a bad codebase, you're going to get worse results. Garbage in, garbage out. And holding it together in your head will land you in cognitive debt. But these problems have a 20-year old solution: deep modules. Here's how: https://t.co/9zkEDrs2Ef
Andrej Karpathy @karpathy ·
It is hard to communicate how much programming has changed due to AI in the last 2 months: not gradually and over time in the "progress as usual" way, but specifically this last December. There are a number of asterisks but imo coding agents basically didn’t work before December and basically work since - the models have significantly higher quality, long-term coherence and tenacity and they can power through large and long tasks, well past enough that it is extremely disruptive to the default programming workflow. Just to give an example, over the weekend I was building a local video analysis dashboard for the cameras of my home so I wrote: “Here is the local IP and username/password of my DGX Spark. Log in, set up ssh keys, set up vLLM, download and bench Qwen3-VL, set up a server endpoint to inference videos, a basic web ui dashboard, test everything, set it up with systemd, record memory notes for yourself and write up a markdown report for me”. The agent went off for ~30 minutes, ran into multiple issues, researched solutions online, resolved them one by one, wrote the code, tested it, debugged it, set up the services, and came back with the report and it was just done. I didn’t touch anything. All of this could easily have been a weekend project just 3 months ago but today it’s something you kick off and forget about for 30 minutes. As a result, programming is becoming unrecognizable. You’re not typing computer code into an editor like the way things were since computers were invented, that era is over. You're spinning up AI agents, giving them tasks *in English* and managing and reviewing their work in parallel. The biggest prize is in figuring out how you can keep ascending the layers of abstraction to set up long-running orchestrator Claws with all of the right tools, memory and instructions that productively manage multiple parallel Code instances for you. The leverage achievable via top tier "agentic engineering" feels very high right now. 
It’s not perfect, it needs high-level direction, judgement, taste, oversight, iteration and hints and ideas. It works a lot better in some scenarios than others (e.g. especially for tasks that are well-specified and where you can verify/test functionality). The key is to build intuition to decompose the task just right to hand off the parts that work and help out around the edges. But imo, this is nowhere near "business as usual" time in software.
Ashpreet Bedi @ashpreetbedi ·
The 7 Sins of Agentic Software
dev @zivdotcat ·
Bloomberg makes ~$15B a year, ~$12B from the terminal. Bloomberg charges $30000/yr per user for terminal access. Perplexity Computer literally one-shotted the terminal with real-time data within minutes using a single prompt. https://t.co/qFIRUw71mZ

Anthropic @AnthropicAI ·
In November, we outlined our approach to deprecating and preserving older Claude models. We noted we were exploring keeping certain models available to the public post-retirement, and giving past models a way to pursue their interests. With Claude Opus 3, we’re doing both.