Block Fires 4,000 and Stock Surges 22% While Anthropic Refuses the Pentagon
Jack Dorsey cut 40% of Block's workforce in the largest AI-driven layoff yet, and Wall Street rewarded it with a $6 billion market cap jump. Anthropic publicly refused Pentagon demands for mass surveillance and autonomous weapons integration. Claude Code shipped auto-memory while OpenAI and Google pushed new product capabilities.
Daily Wrap-Up
Two stories dominated the timeline today, and they sit in uncomfortable tension with each other. Jack Dorsey laid off 4,000 people at Block, roughly 40% of the company, and the stock immediately surged 22%. Every CEO in America watched that happen, and the math is now inescapable: fewer humans plus AI tools equals higher margins, and the market will reward you for acting on it. The posts analyzing this ranged from detailed financial breakdowns to gallows humor, but the consensus was clear. This is not an outlier. It is a blueprint. The fact that Block was profitable and growing when it made the cuts is what makes this moment different from previous tech layoffs.
Meanwhile, Anthropic drew a line in the sand against the Pentagon. Dario Amodei publicly refused demands to enable Claude for mass surveillance and autonomous weapons, stating the company "cannot in good conscience accede to their request." On the product side, Claude Code shipped auto-memory, a feature that lets Claude remember project context, debugging patterns, and preferred approaches across sessions. The juxtaposition is striking: an AI company voluntarily limiting its own power while simultaneously shipping features that make individual developers dramatically more capable.
The rest of the day was a blur of product launches. OpenAI showed off a restaurant voice agent built on gpt-realtime-1.5 and a Codex-to-Figma design workflow. Google dropped Gemini 3.1 Flash Image with faster generation at lower cost. Perplexity apparently one-shotted a Bloomberg Terminal replica. The pace of capability expansion is genuinely hard to track, which is exactly what @cgtwts was getting at when they begged Anthropic to take a day off so everyone could catch up. The most practical takeaway for developers: Claude Code's auto-memory feature is live now and directly applicable to your daily workflow. If you are not using persistent context across coding sessions, you are manually re-explaining things that your tools can now remember for you. Set it up today.
Quick Hits
- @JesseCohenInv posted a speculative 2036 scenario where 80% of jobs have been replaced by AI and robotics. Felt less speculative after the Block news.
- @gdb shared a podcast covering "some intense moments at OpenAI" with no further context. Classic.
- @gdb also dropped a one-liner: "always run with xhigh reasoning." Filing that under cryptic advice from OpenAI co-founders.
- @thekitze celebrated @tinkererclub hitting $333,333 in revenue in its first month, including sponsors. A third of a million in 30 days for a community product is no joke.
- @mattpocockuk argued that AI performs worse on bad codebases (garbage in, garbage out) and pointed to "deep modules," a 20-year-old software design concept, as the solution. Good reminder that code architecture matters more, not less, when AI is writing chunks of it.
Block Fires 4,000: The First Major AI Layoff Blueprint
The single biggest story today was Jack Dorsey cutting Block's workforce from 10,000 to under 6,000 in one move. This was not a struggling company trimming fat. Block's 2026 profit guidance is up 54%, gross profit is growing 18%, and earnings per share projections crushed analyst expectations. Dorsey chose to do this from a position of strength, and he said the quiet part out loud: "Intelligence tools paired with smaller teams have already changed what it means to run a company."
@aakashgupta laid out the brutal arithmetic: "The market added roughly $6 billion in market cap. That's ~$1.5 million in enterprise value created per eliminated role." He went further, contextualizing it against a wave of similar moves: "ASML cut 1,700 jobs last month while reporting record orders. Salesforce cut 5,000 after AI agents started handling 50% of customer interactions. Amazon cut 16,000 in January on top of 14,000 in October. Every one of these companies was growing when they did it."
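The per-role figure is simple division over the announced numbers; a quick sanity check (illustrative only, since real enterprise-value accounting involves far more than a one-day market-cap move):

```python
# Back-of-the-envelope check of @aakashgupta's "~$1.5M per eliminated role".
market_cap_jump = 6_000_000_000  # ~$6B added after the announcement
roles_cut = 4_000

value_per_role = market_cap_jump / roles_cut
print(f"${value_per_role:,.0f} per eliminated role")
```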
The internal mechanics tell an important story for developers. Block's AI platform, called "Goose," started as a small engineering test tool two years ago. Now nearly every employee uses it. As @_Investinq detailed, "Engineers are shipping 40% more code per person than they were six months ago. That's the productivity gain that made 4,000 people expendable." AI fluency was built into performance reviews. If you could not keep up, you were next.
@krystalball captured the second-order effect concisely: "Block just cut 40% of their workforce because of AI and were rewarded with a massive stock surge. Other companies are going to want to recreate this." And @GodsBurnt provided the dark comedy version, tracing the whiplash timeline: companies told workers to go remote in 2020, demanded they return in 2024, then replaced them with AI in 2026. @shiri_shh put it plainly: "Jack Dorsey just laid off 4000 people in a single tweet. AI taking jobs is not a meme anymore."
The signal here is not that AI can replace jobs. Everyone knew that. The signal is that the market will actively reward companies for doing it aggressively and all at once. Dorsey explicitly chose one massive cut over gradual reductions because, in his words, gradual cuts destroy morale and trust. The restructuring charges pay for themselves in two quarters. After that, pure margin expansion. Every board in America is running this calculation tonight.
Anthropic Draws a Line: No Weapons, No Surveillance
In a move that stands in sharp contrast to the "optimize headcount at all costs" mood, Anthropic publicly refused the Pentagon's demands to enable Claude for mass surveillance and autonomous weapons. @AnthropicAI posted a link to a formal statement from CEO Dario Amodei on "discussions with the Department of War."
@cryptopunk7213 broke down the key points from Amodei's statement: "These threats do not change our position: we cannot in good conscience accede to their request." Amodei described the Pentagon's efforts to force Anthropic to enable Claude for mass surveillance and autonomous killing weapons. His response was direct: mass surveillance is not democratic, Claude is not reliable enough for autonomous weapons, and Anthropic would help the government transition to a new provider if they chose to blacklist the company. As @cryptopunk7213 put it, "fair play for sticking by their code of honor."
This is a significant moment for the AI industry. A company valued at tens of billions voluntarily walked away from what would presumably be an enormous government contract, citing both ethical principles and technical limitations. The willingness to acknowledge that their own model "isn't good enough" for certain applications is notable intellectual honesty in an industry that tends toward capability hype. Whether this position holds under sustained government pressure remains to be seen, but the public statement makes it harder to quietly reverse course later.
Claude Code Ships Auto-Memory
On the product side, Anthropic had a busy day. Claude Code 2.1.59 landed with auto-memory as the headline feature. @trq212 explained the concept: "Claude now remembers what it learns across sessions, your project context, debugging patterns, preferred approaches, and recalls it later without you having to write anything down."
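Anthropic has not published how auto-memory works internally, but as a mental model, "persistent notes keyed by topic that survive process restarts" can be sketched in a few lines (file name and function names here are hypothetical, not the actual implementation):

```python
import json
from pathlib import Path

# Hypothetical sketch of session-persistent memory: notes are keyed by
# topic and survive restarts by living in a JSON file on disk.
MEMORY_FILE = Path(".agent_memory.json")

def remember(topic: str, note: str) -> None:
    """Append a note under a topic and persist it to disk."""
    memory = json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}
    memory.setdefault(topic, []).append(note)
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

def recall(topic: str) -> list[str]:
    """Return every note recorded under a topic, or an empty list."""
    if not MEMORY_FILE.exists():
        return []
    return json.loads(MEMORY_FILE.read_text()).get(topic, [])

remember("build", "tests run with pytest -q; CI uses Python 3.11")
print(recall("build"))
```

The point of the sketch is the shape of the feature, not the mechanism: context you would otherwise re-explain each session is written once and recalled automatically later.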
@omarsar0 was brief but emphatic: "Claude Code now supports auto-memory. This is huge!" And @cgtwts captured the developer fatigue that comes with Anthropic's pace: "Someone please tell Anthropic to take a day off so the rest of us can catch up. At this point I'm still processing the previous update."
@oikon48 posted the full release notes in Japanese, covering additional improvements: better "always allow" prefix suggestions for compound bash commands, improved task list ordering, reduced memory usage in multi-agent sessions, and fixes for MCP OAuth token refresh race conditions. The compound command improvement is a quality-of-life fix that addresses a real friction point. When you run chained commands like `cd /tmp && git fetch && git push`, Claude Code now evaluates sub-commands individually for permission rather than treating the whole chain as one opaque block. Small change, big difference in daily workflow.
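Claude Code's actual permission logic is not public, but the idea of evaluating a chained command piece by piece rather than as one opaque string can be sketched like this (the allowlist and the naive splitting rule are illustrative assumptions):

```python
import re

# Illustrative allowlist of command prefixes the user has already approved.
ALLOWED_PREFIXES = {"cd", "git fetch", "git push", "ls"}

def split_chain(command: str) -> list[str]:
    """Split a shell chain on &&, ||, and ; (naive: ignores quoting)."""
    return [part.strip() for part in re.split(r"&&|\|\||;", command) if part.strip()]

def evaluate(command: str) -> dict[str, bool]:
    """Check each sub-command against the allowlist instead of
    approving or rejecting the whole chain as a single block."""
    return {
        sub: any(sub == p or sub.startswith(p + " ") for p in ALLOWED_PREFIXES)
        for sub in split_chain(command)
    }

# "cd /tmp" and "git fetch" pass individually; an unapproved
# sub-command in the same chain would be flagged on its own.
print(evaluate("cd /tmp && git fetch && git push"))
```

The win is granularity: one unrecognized sub-command no longer forces a permission prompt for an otherwise familiar chain.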
AI Products: Voice Agents, Design Workflows, and Terminal Killers
The product announcements kept coming from other players. @OpenAIDevs showed two distinct capabilities: a restaurant voice agent built on gpt-realtime-1.5, and a code-to-design-to-code workflow integrating Codex with Figma. The Figma integration is particularly interesting for frontend developers. The pitch is generating design files from code, collaborating in Figma, then implementing updates back in Codex without breaking flow. If it works as advertised, it closes a gap that has frustrated design-to-development handoffs for years.
@googleaidevs announced Nano Banana 2, which is apparently the internal name for Gemini 3.1 Flash Image. Google described it as their state-of-the-art model for image generation, offering faster speeds and lower costs with improved capabilities. The naming is delightful. The capability race in image generation continues to compress what used to require specialized tools into API calls.
Perhaps the most provocative product claim came from @zivdotcat: "Bloomberg makes ~$15B a year, ~$12B from the terminal. Bloomberg charges $30,000/yr per user for terminal access. Perplexity Computer literally one-shotted the terminal with real-time data within minutes using a single prompt." Whether "one-shotted" here means "replicated the full functionality" or "made a demo that looks similar" matters enormously, but the directional threat to entrenched information monopolies is real. Bloomberg's moat has always been data access plus specialized UI plus network effects. AI tools are eroding at least two of those three.
The Age of Personalized Software
@EsotericCofe posted two related updates showcasing a genuinely novel use case: using OpenClaw to generate a daily personalized news brief delivered by an AI-cloned Angela Merkel "posing as a news anchor with a heavy German accent no one understands." The technical stack is creative: OpenClaw fetches current news, then calls a Krea AI node app that uses Qwen voice clone plus Fabric to generate the video.
The implementation is absurd and funny, but the underlying point is serious. @EsotericCofe declared "the age of PERSONALIZED SOFTWARE is HERE," and they are not wrong. The barrier to creating custom media experiences has collapsed from "hire a production team" to "chain three API calls together." The fact that someone built a personalized AI news anchor as a weekend project says something about where consumer software is heading. The professional media industry should be paying attention to this, not because AI Merkel is competition, but because the tooling to create personalized content experiences is now accessible to anyone with an API key and a creative idea.
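The "chain three API calls together" claim can be made concrete with a stub pipeline. Every function below is a placeholder standing in for a real service in @EsotericCofe's stack (news fetch, script generation, voice-clone video render); none of these names correspond to actual OpenClaw, Krea, or Qwen APIs:

```python
# Hypothetical sketch of a personalized-news-anchor pipeline.
# All three stages are stubs; the real stack wires OpenClaw, Krea,
# and a Qwen voice clone together in the same fetch -> write -> render shape.

def fetch_headlines() -> list[str]:
    # Stand-in for the news-fetching stage (e.g. an agent pulling feeds).
    return ["Block cuts 4,000 jobs", "Anthropic refuses Pentagon demands"]

def write_anchor_script(headlines: list[str]) -> str:
    # Stand-in for an LLM call that turns headlines into anchor copy.
    stories = " ".join(f"Our top story: {h}." for h in headlines)
    return f"Good evening. {stories}"

def render_video(script: str, voice: str = "cloned-anchor") -> str:
    # Stand-in for the voice-clone + video generation call.
    return f"[{voice}] rendered {len(script)} characters of narration"

script = write_anchor_script(fetch_headlines())
print(render_video(script))
```

Swap each stub for a real API client and you have the weekend project: the novelty is not any one stage, but how little glue code now sits between them.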
Sources
How I Structure Obsidian & Claude (Full Walkthrough)
I will run through how I structure my @obsdmd vault, as well as the other files outside of Obsidian that I use @claudeai for. My goal is to make this ...
🚀 Introducing the Qwen 3.5 Medium Model Series Qwen3.5-Flash · Qwen3.5-35B-A3B · Qwen3.5-122B-A10B · Qwen3.5-27B ✨ More intelligence, less compute. • Qwen3.5-35B-A3B now surpasses Qwen3-235B-A22B-2507 and Qwen3-VL-235B-A22B — a reminder that better architecture, data quality, and RL can move intelligence forward, not just bigger parameter counts. • Qwen3.5-122B-A10B and 27B continue narrowing the gap between medium-sized and frontier models — especially in more complex agent scenarios. • Qwen3.5-Flash is the hosted production version aligned with 35B-A3B, featuring: – 1M context length by default – Official built-in tools 🔗 Hugging Face: https://t.co/wFMdX5pDjU 🔗 ModelScope: https://t.co/9NGXcIdCWI 🔗 Qwen3.5-Flash API: https://t.co/82ESSpaqAF Try in Qwen Chat 👇 Flash: https://t.co/UkTL3JZxIK 27B: https://t.co/haKxG4lETy 35B-A3B: https://t.co/Oc1lYSTbwh 122B-A10B: https://t.co/hBMODXmh1o Would love to hear what you build with it.
Introducing Perplexity Computer. Computer unifies every current AI capability into one system. It can research, design, code, deploy, and manage any project end-to-end. https://t.co/dZUybl6VkY
We've rolled out a new auto-memory feature. Claude now remembers what it learns across sessions — your project context, debugging patterns, preferred approaches — and recalls it later without you having to write anything down. https://t.co/c7PyGaukNQ
The 7 Sins of Agentic Software
"Demos are easy. Production is hard" is the most recycled line in AI. After three years building agent infrastructure, here's the truth: Production is...
Perplexity just became the first AI company to truly go head-to-head with the Bloomberg Terminal... Using Perplexity Computer (with no local setup or single LLM limitation), it was able to build me a terminal with real-time data to analyze $NVDA using Perplexity Finance: https://t.co/S3l5F5MRiv