AI Digest.

Anthropic Blacklisted Over Pentagon Surveillance Refusal as Qwen 3.5 Democratizes Local Inference

The biggest AI story of the year erupted as Anthropic was designated a "supply chain risk" for refusing Pentagon mass surveillance demands, only for OpenAI to swoop in with an identical safety framework. Meanwhile, Qwen 3.5's small-but-mighty models proved consumer GPUs can run frontier-grade coding agents, and Claude Code shipped new built-in skills for automated code review.

Daily Wrap-Up

Today was dominated by a single story that will define the AI industry's relationship with government for years to come. Anthropic refused to let Claude be used for mass surveillance or autonomous weapons, got blacklisted by the Pentagon with the same designation it gave Huawei, and then watched OpenAI sign a deal with identical safety terms hours later. The speed of events was staggering, and the implications for every company building on Claude are real. Whether you view Anthropic as principled or naive depends on your priors, but the fact that the Pentagon accepted the same red lines from a competitor makes the designation look more like retaliation than policy.

Away from the geopolitics, today reinforced a trend that keeps accelerating: local inference on consumer hardware is becoming genuinely viable for serious coding work. Qwen 3.5's 35B-A3B model running at 112 tokens per second on a single RTX 3090 is not a toy demo. People are building complete multi-file applications with procedural audio, particle systems, and boss fights in single prompts. The economics of Apple Silicon for memory-bound inference continue to embarrass NVIDIA's pricing in the personal computing segment. If you've been waiting for "good enough" local models to arrive, the wait is over.

The most entertaining moment was easily @NoahKingJr's take on the Iran situation: "Trump: Hey Siri, tell me how many miles I ran today. Siri: ok, sending missiles to Iran today." Dark humor for dark times. The most practical takeaway for developers: install Qwen 3.5-35B-A3B locally and point Claude Code or OpenCode at it via llama.cpp's Anthropic endpoint. You get Sonnet 4.5-grade coding ability on an $800 used GPU with zero API costs, and the open-source harnesses are now reliable enough for sustained multi-file agent sessions.
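The setup @sudoingX describes can be sketched roughly like this (a sketch, not a verified recipe: the GGUF filename is a placeholder for whichever quant you download, and `ANTHROPIC_BASE_URL` is the usual way to repoint Claude Code at a compatible endpoint; check both against your install):

```shell
# Serve Qwen3.5-35B-A3B with llama.cpp, using the flags from @sudoingX's thread:
# full GPU offload, 262K context, single slot, q8_0 KV cache to halve VRAM use
llama-server -m Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf \
  -ngl 99 -c 262144 -np 1 \
  --cache-type-k q8_0 --cache-type-v q8_0 \
  --port 8080

# Point Claude Code at the local server's Anthropic-compatible endpoint
ANTHROPIC_BASE_URL=http://localhost:8080 claude
```

The `-np 1` flag matters at this context length: parallel slots multiply the KV cache, and at 262K that means an instant out-of-memory error.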

Quick Hits

  • @EHuanglu shares that AI animation can now be keyframed per-second using text prompts, a significant step toward production-ready AI video tools.
  • @neural_avb drops a framework for building agentic systems, adding to the growing pile of agent orchestration options.
  • @doodlestein claims to be running AI-assisted development "at scale now for a massive number of projects" with the right tooling and workflows.
  • @sukh_saroy highlights the Financial Datasets MCP Server, giving Claude access to live stock prices, financial statements, and crypto data. Wall Street terminal functionality for free.
  • @theallinpod covers Claude's "hit list" of SaaS companies, the datacenter opposition movement, and SCOTUS striking down tariffs. They note the Anthropic/DoW fallout happened after recording and will be covered next week.
  • @michaeljburry launches a new series comparing historical newspaper coverage to today's AI hype, drawing parallels that should make boosters uncomfortable.
  • @morganlinton flags a "must read" from the founder of Cursor, though gives no details on its content.
  • @affaanmustafa streams a YC Browser Use Hackathon, continuing Y Combinator's heavy investment in browser automation agents.
  • @Full_Metal_QR suggests Anthropic should "just hire this little guy," context unclear but the sentiment resonates.

Anthropic vs. The Pentagon: AI's Biggest Political Crisis

The dominant story across the feed today was the collision between Anthropic and the U.S. Department of War, a sequence of events so compressed and consequential that @cryptopunk7213 called it "the fucking wildest 7 days in U.S. defense history." The core facts: Anthropic drew two hard lines on their Pentagon contract (no mass surveillance of Americans, no autonomous lethal weapons without human oversight), the Pentagon demanded those lines be removed, Anthropic refused, and the administration designated them a "supply chain risk" using the same framework applied to Huawei.

The deepest analysis came from @shanaka86, who surfaced a detail from Axios that changes the calculus entirely:

> "While Anthropic was being blacklisted for refusing to allow mass surveillance, the Pentagon's own 'compromise deal'... would have required Anthropic to allow the collection and analysis of Americans' geolocation data, web browsing history, and personal financial information purchased from data brokers."

This is not an abstract policy dispute. The contract language reportedly asked for access to location tracking, browsing history, and financial records of American citizens. Anthropic said no. Then, as @tedlieu pointed out with genuine bewilderment: "The Department of Defense just agreed to the same two conditions with OpenAI that Anthropic was asking for. Can someone explain? I genuinely don't understand."

Hours after the blacklisting, @sama announced OpenAI's deal with the DoW, carefully noting that "two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force" and that "the DoW agrees with these principles." To OpenAI's credit, they also publicly pushed back on the designation itself, with @OpenAI stating: "We do not think Anthropic should be designated as a supply chain risk and we've made our position on this clear to the Department of War."

But as @markgadala noted, the optics are brutal: "Just a few hours ago he was on TV saying he stood by Anthropic. Then he undercuts them and takes the same contract that Anthropic just lost." The practical fallout extends far beyond the $200M Pentagon contract. @shanaka86 calculates that eight of the ten largest American companies use Claude, and the supply chain designation forces every general counsel with Pentagon exposure to reassess. Anthropic's IPO, expected this year at a $380B valuation, is effectively frozen. @AnthropicAI has announced it will take the administration to court.

Local AI Hits an Inflection Point

Qwen 3.5's release of the 35B-A3B model (35 billion total parameters, only 3 billion active per inference) has kicked off a wave of genuinely impressive local AI demonstrations. @sudoingX provided the most concrete example, giving the model a single detailed spec and watching it produce a complete space shooter game:

> "One prompt. Ten files. 3,483 lines of code. Zero handholding... enemy types, particle systems, procedural audio, powerups, boss fights, ship upgrades, parallax backgrounds, everything in one message."

All of this ran on a single RTX 3090 at 112 tokens per second with no API costs. @KSimback confirmed the broader trend: "Seeing many positive reports of running Qwen 35B-A3B locally on modest consumer hardware. No need for a $10k+ Mac Studio." And @cgtwts went further, claiming the model "outperforms all previous Qwen models, beats models that are 6x larger, smarter than Sonnet 4.5" at coding tasks.

On the hardware economics side, @alexocheema laid out why Apple Silicon dominates for local inference: M3 Ultra memory costs $18/GB versus $360/GB for B200 GPUs. "If DeepSeek V4 is >1T parameters, by far the cheapest way to run it will be Apple Silicon." The interesting wrinkle is the harness layer. @sudoingX found that Claude Code's tool-call error handling was the bottleneck, not the model, and switching to OpenCode with the same local model produced much more sustained autonomous coding sessions. The takeaway: model quality has caught up; now the orchestration layer is the differentiator.
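A back-of-the-envelope check of @alexocheema's per-gigabyte figures, assuming (my assumption, not his) that a hypothetical 1T-parameter model needs roughly 1,000 GB of weights at 8-bit:

```shell
# Memory cost for ~1,000 GB of model weights at the quoted per-GB prices
GB=1000
M3_ULTRA=$((GB * 18))    # at $18/GB on M3 Ultra
B200=$((GB * 360))       # at $360/GB on B200
echo "M3 Ultra: \$$M3_ULTRA vs B200: \$$B200"
```

That is a 20x gap on memory cost alone, which is the entire basis of the Apple Silicon argument for memory-bound inference.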

AI Makes You More Productive, Then Burns You Out

A Berkeley research study tracking 200 employees over 8 months produced findings that challenge the simple "AI makes everyone more productive" narrative. @aakashgupta broke down the self-reinforcing cycle the researchers identified:

> "AI accelerated tasks, raised speed expectations, workers leaned harder on AI, scope expanded, wider scope created more work, more work demanded more AI. That loop has no natural stopping point. The company never installed one."

The key insight is not that AI failed, but that organizations failed to adapt. Individual capability went up, organizational design stayed frozen, and the gap created burnout. A separate NBER study found productivity gains of just 3% across thousands of workplaces, and 77% of employees in an Upwork survey said AI tools actually decreased their productivity. @harjtaggar captured the ground truth more concisely: "Everybody I know using AI is working more hours not less." Meanwhile, @johnrushx extrapolated Claude Code's usage to "40,000 full-time software developers working full time" and predicted 1 million developer-equivalents by 2027. The tension between these perspectives is the central question of AI adoption: are we building leverage, or just building more work?

Claude Code Ships /simplify and /batch

The Claude Code team announced two new built-in skills that automate post-coding cleanup. @bcherny revealed that "/simplify reviews your changed code for reuse, quality, and efficiency, then fixes any issues found," while /batch handles "straightforward, parallelizable code migrations." @dani_avila7 provided a hands-on look:

> "I ran it after finishing a PR review and noticed it spawned 3 parallel agents using Haiku 4.5 to do the analysis... fast and cheap."

This aligns with @addyosmani's broader argument that "the unsolved problem isn't generation but verification. That's where engineering judgment becomes your highest-leverage skill." The shift from writing code to orchestrating and verifying AI-generated code continues to accelerate, and built-in tools that handle the verification loop automatically represent a meaningful quality-of-life improvement for developers already living inside Claude Code.

Agent Communication Infrastructure Matures

The agent ecosystem is developing its own communication primitives. @mattshumer_ announced Agent Relay, describing it as "Slack for AI agents: channels + threads + DMs + realtime events + search + persistent history." @willwashburn co-announced the launch. Separately, @sukh_saroy highlighted OpenClaw Studio, a self-hosted agent dashboard with "live chat, approval gates, job scheduling, and full visibility."

The most thoughtful contribution came from @blader, who identified a gap in how long-running agent sessions maintain coherence:

> "Plans are high level and static. Session history is shallow and leads to ratholing. Theorist is a layer in between: a continuously updated mental model of the root cause, and the current theory of victory."

This resonates with anyone who has watched an agent lose the plot 30 minutes into a complex task. The infrastructure for multi-agent systems is moving from "can agents talk to each other" to "can agents maintain shared understanding over time," which is a much harder and more interesting problem.

Sources

el.cine @EHuanglu ·
wow.. we can actually keyframe every second of AI animation using prompt now https://t.co/WqmbujMam8
el.cine @EHuanglu

Seedance 2.0 turns kids drawing into 100k film scene.. hollywood is cooked https://t.co/G0NJMMN5qG

Ejaaz @cryptopunk7213 ·
the fucking wildest 7 days in U.S. defense history - pentagon revealed they used Claude to capture venezuelan president Maduro - pentagon demands anthropic gives them unadulterated access to claude for mass surveillance and autonomous killing weapons - anthropic says “fuck you” - trump blacklists them calling them woke pussies, Pete Hegseth designates them a “supply-chain risk” - Openai swoops in with better terms stealing anthropic’s deal, securing ChatGPT as the military’s preferred ai model. *5 hours later* - U.S. starts war with Iran and kills supreme leader Khameini insane timeline.
Addy Osmani @addyosmani ·
Every abstraction shift in software history made devs more productive by raising the level of intent. This is the next step: from writing code to orchestrating systems that write code (building "the factory" for your code). The unsolved problem isn't generation but verification. That's where engineering judgment becomes your highest-leverage skill. To truly scale, think "factory model" - orchestrate fleets of agents like a production line: clear specs as blueprints, TDD for quality control, strong architecture to amplify leverage.
Michael Truell @mntruell

The third era of AI software development

Aakash Gupta @aakashgupta ·
The headline says AI intensifies work. What the study actually found is more interesting than that. Berkeley researchers tracked 200 employees for 8 months. AI made every single one of them more capable. They wrote code they couldn’t write before. They took on tasks they used to outsource. They moved faster on work that would have sat in a backlog for months. And then they burned out. Because the company changed nothing else. The org handed people a tool that 10x’d their ability to start new work, then kept the org chart, meeting cadence, review processes, and scope boundaries completely identical. Zero workflow redesign. This is like giving everyone a car and keeping the speed limit signs from the horse-and-buggy era. People drove faster because they could, crashed because nobody updated the roads. The self-reinforcing cycle the researchers found is worth sitting with: AI accelerated tasks → raised speed expectations → workers leaned harder on AI → scope expanded → wider scope created more work → more work demanded more AI. That loop has no natural stopping point. The company never installed one. Meanwhile, a separate NBER study across thousands of workplaces found productivity gains of just 3%. And an Upwork survey found 77% of employees say AI tools actually decreased their productivity. The pattern across all of this research is identical: individual capability goes up, organizational design stays frozen, and the gap between the two creates burnout. The study literally recommends companies build an “AI practice” with structured reflection intervals and scope limits. The researchers aren’t saying AI failed. They’re saying management failed to adapt to AI. Every CEO reading this headline as validation for slowing AI adoption is making exactly the wrong bet. The companies that win will be the ones that redesign the operating system around the intensity, not the ones that avoid it.
Rohan Paul @rohanpaul_ai

Powerful new Harvard Business Review study. "AI does not reduce work. It intensifies it. " A 8-month field study at a US tech company with about 200 employees found that AI use did not shrink work, it intensified it, and made employees busier. Task expansion happened because AI filled in gaps in knowledge, so people started doing work that used to belong to other roles or would have been outsourced or deferred. That shift created extra coordination and review work for specialists, including fixing AI-assisted drafts and coaching colleagues whose work was only partly correct or complete. Boundaries blurred because starting became as easy as writing a prompt, so work slipped into lunch, meetings, and the minutes right before stepping away. Multitasking rose because people ran multiple AI threads at once and kept checking outputs, which increased attention switching and mental load. Over time, this faster rhythm raised expectations for speed through what became visible and normal, even without explicit pressure from managers.

Sukh Sroay @sukh_saroy ·
🚨Someone just built a live mission control center for your AI agents and it runs completely on your own hardware. It's called OpenClaw Studio and it's not a toy. It's a real AI agent dashboard with live chat, approval gates, job scheduling, and full visibility into everything your agents are doing right now. Here's what it actually does: → Live dashboard showing every agent running in real time → Chat directly with your agents from the browser → Approve or block dangerous actions before they execute → Schedule automated jobs with built-in cron support → Connects to any OpenClaw Gateway local or cloud → Works from your laptop, phone, or any device on your network → Full WebSocket streaming so you see everything as it happens Enterprise AI observability tools charge $500/month for worse versions of this. This runs on your hardware. Your agents. Your rules. The AI agent control room just got open sourced. 100% Open Source. (Link in the comments)
Sudo su @sudoingX ·
this is what a 24gb VRAM builds in 2026. one prompt. ten files. 3,483 lines of code. zero handholding. i gave Qwen3.5-35B-A3B a single detailed spec describing the full game architecture and hit enter. enemy types, particle systems, procedural audio, powerups, boss fights, ship upgrades, parallax backgrounds, everything in one message. the model planned the file structure itself, wrote every module in dependency order, wired all the imports, and served the game on port 3001. it ran on first load. when it hit a bug in collision detection it read its own error output, found the issue, fixed it, and kept building. this is pure agent loop running on local hardware. what you're looking at is pixelated octopus aliens with tentacle animations, 4 layer parallax space background with planets at different depths, a full particle system handling explosions and ink splatter and engine trails and bullet impacts, procedural audio through Web Audio API with zero sound files loaded, unleash mode with combo multiplier, boss fights every 5 levels, ship upgrades that unlock as you progress. no libraries. no frameworks. vanilla JS and Canvas. 3B active parameters. single RTX 3090. llama.cpp with q8_0 KV cache at 262K context. Claude Code pointed at localhost:8080 through the native Anthropic endpoint. no API costs. 112 tok/s. a GPU you can buy used for $800. game is called Octopus Invaders and i actually like playing it.
Sudo su @sudoingX

testing Qwen3.5-35B-A3B latest optimized version by UnslothAI on a single RTX 3090. one detailed prompt. zero handholding. watch a 3B model scaffold an entire multifile game project autonomously. the setup: > model: Qwen3.5-35B-A3B (80B total, only 3B active per token) > quant: UD-Q4_K_XL by Unsloth (MXFP4 layers removed in latest update) > speed: 112 tok/s generation, ~130 tok/s prefill > context: 262K tokens > flags: -ngl 99 -c 262144 -np 1 --cache-type-k q8_0 --cache-type-v q8_0 > engine: llama.cpp > agent: Claude Code talk to localhost:8080 (llama.cpp now has native Anthropic API endpoint. no LiteLLM needed) q8_0 KV cache cuts VRAM usage in half vs f16 at 262K. -np 1 is default but worth noting. parallel slots multiply KV cache and at 262K that's an instant OOM. the prompt was more detailed than this but you get the idea: build a space shooter with parallax backgrounds, particle systems, procedural audio, 4 enemy types, boss fights, power-up system, and ship upgrades. 8 JavaScript modules. no libraries. game's called Octopus Invaders. gameplay footage dropping next.

Matt Shumer @mattshumer_ ·
Agents are turning into teams. Teams need Slack. Agent Relay is that layer for AI agents: channels + threads + DMs + realtime events + search + persistent history. In 12 months, this will feel obvious.
Will Washburn @willwashburn

Introducing Agent Relay

Shanaka Anslem Perera ⚡ @shanaka86 ·
Anthropic just announced it will take the Trump administration to court over the supply chain risk designation. And in the same breath, Axios revealed the detail that changes everything about this story. While Anthropic was being blacklisted for refusing to allow mass surveillance, the Pentagon’s own “compromise deal” that Under Secretary Emil Michael was offering on the phone at the exact moment Hegseth posted the designation on X would have required Anthropic to allow the collection and analysis of Americans’ geolocation data, web browsing history, and personal financial information purchased from data brokers. Read that again. The Pentagon spent two weeks saying it has no interest in mass surveillance of Americans. Then the deal they actually put on the table asked for access to your location, your browsing history, and your financial records. They told us Anthropic was lying. The contract language told us Anthropic was right. Now here is where this becomes an existential question for a $380 billion company. The supply chain risk designation means every company that does business with the Pentagon must certify they do not use Claude. Eight of the ten largest companies in America use Claude. Defense contractors, cloud providers, consulting firms, banks. The blast radius is not the $200 million Pentagon contract. It is the enterprise ecosystem that generates $14 billion in annual revenue. Anthropic’s legal argument is specific: under 10 USC 3252, the designation can only restrict use of Claude on Pentagon contract work. Your commercial API access, your https://t.co/koW5OJjjaM subscription, your enterprise license are, in Anthropic’s reading, completely unaffected. But here is the problem. That is a legal argument. It will take years to resolve in court. And in the meantime, every general counsel at every Fortune 500 company with any Pentagon exposure is going to ask one question: is using Claude worth the risk? 
The IPO, which was expected this year at a $380 billion valuation backed by $30 billion in fresh capital, is functionally frozen. No underwriter will price an offering while a company carries the same designation as Huawei. And here is the final detail nobody has processed yet. Hours after blacklisting Anthropic, the Pentagon accepted OpenAI’s proposed safety framework, which contains the identical red lines: no mass surveillance, no autonomous lethal weapons. They destroyed one company for a position they then accepted from its competitor. Full analysis on Substack. https://t.co/AEv8EMPdsZ
Secretary of War Pete Hegseth @SecWar

This week, Anthropic delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon. Our position has never wavered and will never waver: the Department of War must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic. Instead, @AnthropicAI and its CEO @DarioAmodei, have chosen duplicity. Cloaked in the sanctimonious rhetoric of “effective altruism,” they have attempted to strong-arm the United States military into submission - a cowardly act of corporate virtue-signaling that places Silicon Valley ideology above American lives. The Terms of Service of Anthropic’s defective altruism will never outweigh the safety, the readiness, or the lives of American troops on the battlefield. Their true objective is unmistakable: to seize veto power over the operational decisions of the United States military. That is unacceptable. As President Trump stated on Truth Social, the Commander-in-Chief and the American people alone will determine the destiny of our armed forces, not unelected tech executives. Anthropic’s stance is fundamentally incompatible with American principles. Their relationship with the United States Armed Forces and the Federal Government has therefore been permanently altered. In conjunction with the President's directive for the Federal Government to cease all use of Anthropic's technology, I am directing the Department of War to designate Anthropic a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic. Anthropic will continue to provide the Department of War its services for a period of no more than six months to allow for a seamless transition to a better and more patriotic service. America’s warfighters will never be held hostage by the ideological whims of Big Tech. 
This decision is final.

AVB @neural_avb ·
A simple framework to build Agentic Systems that just works
Sam Altman @sama ·
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of benefits are the core of our mission. Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement. We also will build technical safeguards to ensure our models behave as they should, which the DoW also wanted. We will deploy FDEs to help with our models and to ensure their safety, we will deploy on cloud networks only. We are asking the DoW to offer these same terms to all AI companies, which in our opinion we think everyone should be willing to accept. We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements. We remain committed to serve all of humanity as best we can. The world is a complicated, messy, and sometimes dangerous place.
Aniket @Aniket_Singh04 ·
Nobody’s talking about what just happened to Anthropic: Anthropic built the AI that half the US government quietly depends on daily They were deep in a $200M Pentagon deal — one of the biggest AI contracts ever Anthropic drew two hard lines: Claude won’t surveil American citizens, Claude won’t pull a trigger without a human deciding The Pentagon said those lines needed to go. Anthropic said they weren’t moving (respect 🫡) Trump signed an order cutting Claude from every federal agency overnight The Pentagon then slapped them with a “national security risk” designation — the same one they gave Huawei Every classified system running Claude has 6 months to rip it out completely Sam Altman — Anthropic’s biggest competitor — publicly said OpenAI has the same rules and wouldn’t have budged either The US government just punished a company for refusing to let AI kill or spy unsupervised.
Aidan Gold @MrGoldBro ·
Let me get this straight: Anthropic refused to work with DoW unless they could promise their tech wasn't used for surveillance or killing. DoW said that they need full capabilities. Anthropic declined to give full access. OpenAI stood by Anthropic for ensuring AI safety. Trump then cancelled all Anthropic usage across the government, including a $200m contract. OpenAI then submits a bid to replace Anthropic.
Mark Gadala-Maria @markgadala ·
Just a few hours ago he was on TV saying he stood by Anthropic. Then he undercuts them and takes the same contract that Anthropic just lost. How can anyone trust this guy?
Sam Altman @sama (tweet quoted above)

Jeffrey Emanuel @doodlestein ·
@Suhail I’m doing this at scale now for a massive number of projects. It’s already here if you use the right tooling, workflows, and prompts: https://t.co/YjjIPsxaxC
Will Washburn @willwashburn ·
Introducing Agent Relay
Ted Lieu @tedlieu ·
The Department of Defense just agreed to the same two conditions with OpenAI that Anthropic was asking for. Can someone explain? I genuinely don’t understand.
Sam Altman @sama (tweet quoted above)

John Rush @johnrushx ·
This equals 40,000 full-time software developers working full time. End of 2026: 200,000 developers. 2027: just Claude Code alone will be adding as much code as 1,000,000 full-time human developers. 2028: 1B+ Enjoy your last lines of handwritten code. Horses replaced by cars.
SemiAnalysis @SemiAnalysis_

4% of GitHub public commits are being authored by Claude Code right now. At the current trajectory, we believe that Claude Code will be 20%+ of all daily commits by the end of 2026. While you blinked, AI consumed all of software development. https://t.co/pFti4r6uR9

Anthropic @AnthropicAI ·
A statement on the comments from Secretary of War Pete Hegseth. https://t.co/Gg7Zb09IMR
Harj Taggar @harjtaggar ·
Everybody I know using AI is working more hours not less.
Rohan Paul @rohanpaul_ai (tweet quoted above)

OpenAI @OpenAI ·
We do not think Anthropic should be designated as a supply chain risk and we’ve made our position on this clear to the Department of War.
Kevin Simback 🍷 @KSimback ·
The math is mathing even more now! Seeing many positive reports of running Qwen 35B-A3B locally on modest consumer hardware No need for a $10k+ Mac Studio So you get a Sonnet 4.5 grade model that can run privately at home, then you can chat with it on your phone via Tailscale
LM Studio @lmstudio

Qwen3.5-35B-A3B is now available in LM Studio! This model outperforms previous Qwen models that are more than 6x its size 🤯🚀 Requires about ~21GB to run locally. https://t.co/sBkbpxdwRA

Boris Cherny @bcherny ·
In the next version of Claude Code.. We're introducing two new Skills: /simplify and /batch. I have been using both daily, and am excited to share them with everyone. Combined, these skills automate much of the work it used to take to (1) shepherd a pull request to production and (2) perform straightforward, parallelizable code migrations.