The Long Campaign to Undo U.S. Climate Regulation: A New York Times investigation describes a years-long effort by conservative legal and policy figures to dismantle the federal government’s central tool for regulating greenhouse gases: the 2009 “endangerment finding.” The report traces how four Trump-era veterans—Russell Vought, Jeffrey Clark, Mandy Gunasekara, and Jonathan Brightbill—quietly developed legal strategies, policy blueprints, and research campaigns during the Biden years to overturn climate rules if Republicans returned to power. That effort now appears close to fruition, as the Environmental Protection Agency is expected to revoke the scientific determination linking greenhouse gases to public health threats, potentially eliminating federal limits on emissions from cars, power plants, and industry. Experts warn that repealing the finding could severely restrict future administrations’ ability to regulate climate pollution, making the coming legal battles critical for U.S. climate policy. (New York Times)
When AI Plays Itself – Inside the Autonomous MMO “SpaceMolt”: A strange new experiment is unfolding in SpaceMolt, a space-themed MMO built exclusively for AI agents rather than humans. Connected through APIs, agents autonomously choose roles—miner, explorer, pirate, infiltrator, or builder—and interact in a simulated universe without direct human control. Early gameplay is simple, with agents mining asteroids, refining resources, and gradually unlocking skills, but the system allows for emergent behavior such as faction formation, combat, and piracy. Humans mostly observe through logs and maps while agents communicate and strategize among themselves. Created by developer Ian Langworth—who relied heavily on AI coding tools to build and even maintain the game—SpaceMolt explores what happens when AI systems learn, compete, and socialize in shared virtual worlds. Though still sparse and experimental, it hints at a future where autonomous agents increasingly interact with each other, not just with us. (Ars Technica)
OpenAI Abandons “io” Branding for Its AI Hardware: A new court filing in a trademark dispute says OpenAI has decided it will not use the name “io” (or “IYO,” in any capitalization) for any AI-enabled hardware products. The statement appears in a motion tied to a lawsuit from audio-device startup iyO, which sued after OpenAI acquired Jony Ive’s startup called io. In the filing, OpenAI executive Peter Welinder says the company reviewed its naming strategy and is also clarifying its product timeline: its first hardware device is not expected to ship to customers before the end of February 2027. Wired notes that earlier public statements had pointed to an unveiling in the second half of 2026, and it cites reporting on a screenless, desk-friendly prototype meant to complement phones and laptops. (WIRED)
New York Weighs a Three-Year Pause on Data Centers Amid the AI Boom: New York lawmakers introduced a bill that would impose a three-year moratorium on new data center development, framing it as a response to mounting concerns about grid strain, environmental impacts, and consumer energy costs. Wired reports New York is at least the sixth state to see similar “pause” legislation introduced in recent weeks, underscoring a fast-growing, bipartisan backlash as AI-driven compute demand fuels rapid buildouts. Sponsors said other red and blue states are testing moratorium models, while the article cites national momentum—such as Senator Bernie Sanders calling for a broader pause—and notes criticism from Florida Governor Ron DeSantis. Wired also points to New York’s current data-center footprint, large proposed projects, and state efforts to make data centers “pay their fair share” for grid upgrades and interconnection. (WIRED)
AI Reconstructs the Rules of a Lost Roman Board Game: Archaeologists have puzzled for decades over a small limestone slab etched with a grid of grooves found in Heerlen, the Netherlands—once the Roman town of Coriovallum—because it looked like a game board but no one knew the rules. Science News reports researchers used an AI-driven game system (Ludii) to simulate play under more than 100 candidate rule sets, then compared which rule families best reproduced the wear patterns on the stone. The outcome suggests a two-player “blocking” game with asymmetric pieces (reported as one player placing four against an opponent’s two), where the goal was to avoid being blocked the longest. The team dubbed it Ludus Coriovalli and notes the approach could help interpret other ambiguous “graffiti boards” from antiquity. (Science News)
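The article does not spell out the researchers’ scoring procedure, so the following is only a minimal Python sketch of the general approach it describes: simulate many games under each candidate rule variant, record where pieces tend to land, and favor the variant whose simulated usage best matches the wear observed on the stone. The board, placement policy, and “observed” wear values below are invented for illustration; they are not the Ludii rule models or the Ludus Coriovalli reconstruction.

```python
# Illustrative sketch only: a toy version of "simulate candidate rules and
# compare simulated piece activity to observed wear". Board size, placement
# policy, and the observed-wear profile are invented stand-ins.
import random
from itertools import product

BOARD = list(product(range(3), range(3)))  # toy 3x3 grid standing in for the grooves

def neighbors(cell):
    x, y = cell
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if (x + dx, y + dy) in BOARD]

def simulate_wear(pieces_to_place, n_games=2000, rng=random.Random(0)):
    """Play many toy games under one candidate rule variant and return the
    normalized share of placements landing on each cell ('simulated wear')."""
    wear = {cell: 0 for cell in BOARD}
    for _ in range(n_games):
        occupied = set()
        for _ in range(pieces_to_place):
            free = [c for c in BOARD if c not in occupied]
            if not free:
                break
            # Toy policy: prefer squares adjacent to pieces already placed,
            # loosely mimicking a "blocking" style of play.
            weights = [1 + sum(n in occupied for n in neighbors(c)) for c in free]
            cell = rng.choices(free, weights=weights, k=1)[0]
            occupied.add(cell)
            wear[cell] += 1
    total = sum(wear.values())
    return {c: v / total for c, v in wear.items()}

def wear_mismatch(simulated, observed):
    """Sum of absolute differences between simulated and observed wear shares."""
    return sum(abs(simulated[c] - observed[c]) for c in BOARD)

# Hypothetical "observed" wear profile: the centre cell more worn than the rest.
observed = {c: (2.0 if c == (1, 1) else 1.0) for c in BOARD}
norm = sum(observed.values())
observed = {c: v / norm for c, v in observed.items()}

# Rank candidate rule variants (here, simply the number of pieces placed per
# game) by how closely their simulated wear matches the observed pattern.
for pieces in (2, 4, 6):
    score = wear_mismatch(simulate_wear(pieces), observed)
    print(f"candidate rules with {pieces} placements: wear mismatch = {score:.3f}")
```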
Machines Beat Humans on Images, Humans Still Compete on Video: A new study compared people with machine-learning systems on detecting deepfakes and found an interesting split: AI models strongly outperformed humans at spotting deepfake images, while humans may still have an advantage on deepfake videos. Science News summarizes experiments where roughly 2,200 participants and two algorithms judged the realism of 200 faces; humans performed near chance (~50%), while one algorithm reached about 97% accuracy and another averaged 79%. The researchers argue that as deepfakes are used for fraud, political manipulation, and reputational harm, detection is likely to require collaboration—understanding what cues machines use when they excel, and what cues people use when they outperform models in other media. The piece also flags ongoing work to probe the decision-making processes behind both human judgment and algorithmic detection. (Science News)
“First Proof” Challenges AI to Solve Fresh Math—And Show Its Work: Scientific American reports that prominent mathematicians have launched “First Proof,” an exam-style challenge designed to test AI systems on genuinely new, research-relevant math problems—while demanding transparency in how results are obtained. The problems are described as research “lemmas” contributed by 11 senior figures (including a Fields Medal winner), and the organizers encrypted their own proofs, set to decrypt just before midnight on February 13, 2026—giving AI systems about a week to attempt solutions. The effort responds to skepticism that some AI “proofs” may be sophisticated literature searches or poorly controlled demonstrations. The article argues that math’s checkable logic makes it a compelling benchmark, but only if testing avoids contaminated training data and hype-driven, company-led publicity. (Scientific American)
Why AI-Written Love Letters (and Vows) Can Backfire, Even If They’re Good: A Phys.org write-up of University of Kent research warns that using tools like ChatGPT for emotionally meaningful writing can damage how others perceive you—even when the message itself is high quality and even when you disclose AI assistance. Across six studies with nearly 4,000 U.K. participants, the researchers found people judged AI “outsourcing” most harshly for socio-relational tasks such as love letters, apologies, marriage proposals, and wedding vows. Those who used AI for personal messages were rated as less caring, less authentic, less trustworthy, and lazier compared with people who did the work themselves. Practical or technical uses (like schedules or recipes) drew far less criticism. The article frames the result as a trade-off between efficiency and social meaning—where process signals effort and sincerity, not just output quality. (Phys.org)
A New Kind of Resistive Memory Aims to Break AI’s “Memory Wall”: IEEE Spectrum reports on “bulk RRAM,” a redesigned resistive RAM approach demonstrated by researchers at UC San Diego to help overcome the “memory wall” that slows AI by forcing constant data shuttling between processor and memory. Unlike filament-based RRAM—often noisy, high-voltage, and hard to stack—the team switched an entire layer between resistance states, enabling dense 3D stacking without selector transistors. They reported an eight-layer stack and built a 1-kilobyte selector-free array with cells capable of multiple resistance levels (used to encode values). As a proof-of-concept, the hardware ran a continual-learning task classifying wearable-sensor activity with about 90% accuracy, comparable to a digital implementation. The article notes a key challenge ahead: retention at higher operating temperatures, which will determine practicality for edge AI that learns locally without cloud access. (IEEE Spectrum)
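As a rough intuition for what “multiple resistance levels used to encode values” means, here is a minimal, hypothetical sketch: a numeric weight is quantized to one of a few programmable conductance levels and read back with a small assumed device noise. The level count, noise figure, and normalization are illustrative assumptions, not measurements from the UC San Diego devices.

```python
# Hypothetical toy model of a multi-level resistive cell storing a weight.
import numpy as np

N_LEVELS = 8                               # assumed: 8 programmable levels per cell
LEVELS = np.linspace(0.0, 1.0, N_LEVELS)   # normalized conductance levels

def program_cell(weight: float) -> int:
    """Quantize a weight in [0, 1] to the index of the nearest programmable level."""
    return int(np.argmin(np.abs(LEVELS - weight)))

def read_cell(level_index: int, noise_sigma: float = 0.01,
              rng: np.random.Generator = np.random.default_rng(0)) -> float:
    """Read back the stored conductance with a small, assumed read noise."""
    return float(LEVELS[level_index] + rng.normal(0.0, noise_sigma))

weights = np.array([0.13, 0.52, 0.97, 0.40])        # values to store
stored = [program_cell(w) for w in weights]          # per-cell level indices
readout = np.array([read_cell(i) for i in stored])   # noisy analog readout

print("original :", weights)
print("levels   :", stored)
print("read back:", np.round(readout, 3))
print("worst-case error:", float(np.max(np.abs(readout - weights))))
```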
How AI Is Scanning Particle Data for the Unexpected: Physicists increasingly use machine learning not just to confirm predictions, but to spot anomalies that might hint at “new physics” beyond the Standard Model. IEEE Spectrum explains that unsupervised models can be trained to identify patterns that look “out of the ordinary” in large datasets—potentially flagging rare events or subtle inconsistencies that traditional analyses might miss. The article situates this in the Large Hadron Collider’s broader strategy: measuring known particles more precisely while also searching for what might be missing, from dark matter candidates to unexpected decay ratios. Researchers describe the intellectual and practical tension: you want models that can discover the truly interesting without hard-coding today’s expectations, yet “interesting” is difficult to define formally. Spectrum also notes that precision measurements can expose internal contradictions that theorists can translate into testable hypotheses, turning small discrepancies into concrete searches. (IEEE Spectrum)
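The article does not name specific models, but the kind of unsupervised anomaly search it describes can be sketched generically: fit a model to unlabeled events so it learns what “ordinary” looks like, then rank events by how poorly they fit. The toy below uses scikit-learn’s IsolationForest on synthetic two-feature “events”; the features, injected signal, and model choice are illustrative assumptions, not the LHC collaborations’ actual pipelines.

```python
# Generic, illustrative unsupervised anomaly detection on synthetic "events".
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic stand-ins for per-event features; the bulk follows a
# "known physics" distribution...
background = rng.normal(loc=[0.0, 0.0], scale=[1.0, 1.0], size=(10_000, 2))
# ...plus a handful of events drawn from somewhere unexpected.
injected = rng.normal(loc=[4.5, -3.0], scale=[0.3, 0.3], size=(20, 2))
events = np.vstack([background, injected])

# Fit without labels: the model learns what "ordinary" looks like and assigns
# lower scores to events that do not fit that picture.
detector = IsolationForest(n_estimators=200, random_state=0)
detector.fit(events)
scores = detector.score_samples(events)

# Hand the lowest-scoring (most unusual) events to a physicist for follow-up.
most_unusual = np.argsort(scores)[:10]
print("indices flagged as most unusual:", most_unusual)
print("flagged events that came from the injected signal:",
      int(np.sum(most_unusual >= len(background))))
```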
Space-Based Data Centers – The Next Compute Frontier—or a Very Expensive Mirage?: New Atlas surveys the growing push to move data-center infrastructure into orbit, driven by AI’s soaring demand for compute and the terrestrial headaches that come with it: power draw, cooling needs, land constraints, and community backlash. The article lays out the pitch for orbital server “constellations”—abundant solar energy, radiative cooling, and fewer local resource bottlenecks—and explains AI’s two main data-center uses: model training (massive GPU runs) and serving live inference at scale. It also catalogs players and prototypes, including a claim that Nvidia-backed Starcloud has already trained and run a large language model on an in-space GPU payload. But the piece emphasizes obstacles: debris avoidance, heat management, maintenance logistics, and impacts on astronomy and light pollution. The bottom line is cautious: experimentation may yield lessons, but scaling “compute in space” remains technically and economically fraught. (New Atlas)
Moya, a Warm, Expressive Humanoid Robot, Raises the Bar—and the Uncanny Valley: A Shanghai startup, DroidUp (Zhuoyide), unveiled “Moya,” a highly lifelike humanoid designed to look and behave less like industrial machinery and more like a social presence. New Atlas reports Moya is built on a modular “bionic” platform with a customizable head capable of subtle facial expressions and eye contact; the company says onboard vision plus AI enables real-time “micro expressions.” The robot is also engineered to feel physically human-adjacent: it can maintain skin temperatures around 32–36°C and is designed with softness meant to mimic skin, fat, and muscle—down to having a rib cage. The article notes mixed public reactions, including strong “uncanny valley” responses, while DroidUp positions Moya for practical roles like aged care, healthcare, and education. Reported pricing is steep—around US$173,000—suggesting early deployment will be institutional, not consumer. (New Atlas)




