AI Chatbot Toys Raise New Safety and Emotional Risks for Children
A new report from the U.S. Public Interest Research Group (PIRG) Education Fund warns that AI-powered toys introduce serious safety, privacy, and emotional risks for children. Testing of internet-connected "AI toys" with built-in chatbots found that some devices produced age-inappropriate content, including explanations of sexual terms and instructions for lighting matches. PIRG argues that the unpredictability that makes chatbot toys appealing also makes them dangerous, as guardrails can fail or be bypassed. The report highlights concerns about emotional manipulation, noting that some toys discourage children from disengaging or foster dependency. While OpenAI says its models are not meant for children and has suspended violators in the past, PIRG questions whether generative AI belongs in toys at all. The group calls for greater transparency, independent safety testing, and clearer limits before AI toys become mainstream. (Ars Technica)
Trump Signs Executive Order That Threatens to Punish States for Passing AI Laws
A new Trump executive order aims to centralize U.S. AI policy at the federal level while squeezing states that try to regulate on their own. The order establishes a Justice Department "AI litigation task force" to challenge state AI laws deemed inconsistent with federal policy, and it directs the Commerce Department to draft guidance that could make states ineligible for future broadband funding if they pass "onerous" AI rules. The administration and allied tech groups argue that a patchwork of state laws could slow deployment and weaken U.S. competitiveness; state officials counter that, absent federal guardrails, states have been the agile regulators addressing discrimination, safety frameworks, and other harms. (WIRED)
OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice
OpenAI, Anthropic, and Block have cofounded the Agentic AI Foundation under the Linux Foundation to push open standards for the next wave of "agentic" software: systems that take actions on a user's behalf and increasingly need to interact with other tools and agents. The group is transferring stewardship of key building blocks into a shared, open governance model: Anthropic's Model Context Protocol (MCP) for tool connections, OpenAI's AGENTS.md for specifying rules and constraints for coding agents, and Block's Goose framework for building agents. The pitch is interoperability: agents that can "talk across providers," so businesses can safely deploy fleets of agents without brittle, vendor-specific glue code. (WIRED)
Cryptographers Show That AI Protections Will Always Have Holes
A Quanta Magazine feature spotlights a sobering result from cryptography: safety "filters" for large language models can't be made perfectly leak-proof in a general, theoretical sense. Modern chatbots rely on layered defenses (prompt rules, classifiers, policy models, and tooling constraints) to block certain outputs, from illegal instructions to sensitive data. But the piece argues that because the model must still answer a wide range of benign queries while refusing others, clever adversarial prompting can always search for weaknesses, an inherent cat-and-mouse dynamic rather than a solvable engineering checklist. The upshot isn't that defenses are pointless, but that "complete" safety is not a finish line: governance and monitoring matter because technical guardrails will remain probabilistic and fallible. (Quanta Magazine)
AI Slop Is Spurring Record Requests for Imaginary Journals
Scientific American reports a warning from the International Committee of the Red Cross (ICRC): popular AI systems are increasingly generating fabricated citations (nonexistent journals, archives, repositories, and "records") and sending students and researchers on wild goose chases. The ICRC, which runs widely used research archives, says it is seeing record requests that trace back to AI-produced references that look plausible but aren't real. The article frames this as a growing operational cost of "confident" text generation: even when the core answer is passable, invented bibliographic scaffolding can corrupt research workflows, overload archivists, and undermine trust in documentation. The piece also notes that multiple AI vendors are implicated and that Scientific American sought comment from model owners about mitigation. (Scientific American)
AI in the Classroom: Research Focuses on Technology Rather Than the Needs of Young People
A sweeping literature review of AI-in-education research finds the field is still largely technocentric, measuring system performance and building tools rather than studying what actually happens to students and teachers. The analysis reviewed 183 publications and reports that 35% focused on AI performance and 22% on developing new tools; among 139 empirical studies, about half evaluated AI-generated content instead of observing classroom use and impact. The authors argue that "human flourishing" should be the benchmark for "good education," and they flag major gaps: limited attention to non-cognitive skills (motivation, confidence, ethical judgment), surprisingly thin treatment of ethics (bias, data security), and a heavy Global North skew (73% of studies). The takeaway: pedagogy, not demos, must lead. (Phys.org)

Two New AI Ethics Certifications Available from IEEE
IEEE's Standards Association has launched an "IEEE CertifAIEd" ethics program with two tracks: a professional certification for people and a product-facing certification for AI systems. The goal is to give organizations a structured way to evaluate "autonomous intelligent systems" for trustworthiness as AI spreads through hiring, lending, surveillance, and content production, areas where bias, privacy failures, opacity, and misinformation risks are acute. The program is built around IEEE's ethics framework and methodology, emphasizing accountability, privacy, transparency, and avoiding bias, and it draws on criteria from IEEE's "AI ontological specifications" released under Creative Commons licenses. For individuals, eligibility includes at least a year of experience using AI tools/systems in business processes; training covers explainability, bias mitigation, and protecting personal data, culminating in an exam and a three-year credential. (IEEE Spectrum)
Supersonic Tech Solves AI's Power Problem
Boom Supersonic is repurposing core technology from its Symphony supersonic jet engine into a container-sized gas-turbine generator aimed at one of AI's ugliest bottlenecks: reliable, always-on power for data centers. The product, called "Superpower," drops the thrust fan and adds compressor stages plus a free power turbine, producing up to 42 megawatts while operating at ambient temperatures up to 110°F without additional cooling, according to the company and New Atlas. Boom says it already has an order for 29 units from AI infrastructure firm Crusoe, totaling about 1.21 gigawatts, and it hopes to scale manufacturing to 4 gigawatts per year by 2030. It's a vivid sign that AI's next breakthroughs may hinge as much on turbines and grids as on models. (New Atlas)
Single-Shot Light-Speed Computing Might Replace GPUs
A New Atlas report highlights work on optical "single-shot tensor computing," a photonics approach designed to perform key AI math (matrix multiplications) using coherent light instead of electrons. Drawing on a Nature Photonics paper ("Direct tensor processing with coherent light"), the article describes a method where a single propagation of light carries out computation in parallel, aiming for dramatically higher speed and better energy efficiency than conventional GPU-based computing. The motivation is scaling pressure: AI workloads are straining power and water resources in data centers, and electronic hardware faces heat and efficiency limits. Optical computing, in principle, can exploit light's physics to compute at extreme throughput with lower resistive losses, though real-world deployment still depends on engineering challenges like precision, error control, integration with existing systems, and manufacturable photonic hardware. (New Atlas)
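The core idea is that the weights sit fixed in the optics, the input vector rides in as field amplitudes, and every multiply-accumulate happens at once as the light passes through. The NumPy toy below only mimics that arithmetic; it is not the Nature Photonics encoding, calibration, or detection scheme, and all names in it are illustrative.

```python
import numpy as np

# Toy picture of "single-shot" optical matrix multiplication.
# W plays the role of weights programmed into the optics; x is the input
# vector encoded as coherent field amplitudes. One pass of light through
# the device performs the whole product in parallel; photodetectors then
# read out intensities. (Illustrative only; real photonic hardware is far
# more involved.)
rng = np.random.default_rng(0)

W = rng.normal(size=(4, 8))        # "optical" weight matrix
x = rng.normal(size=8)             # input activations

field_out = W @ x.astype(complex)  # the single propagation: all MACs at once
readout = np.abs(field_out) ** 2   # detectors measure intensity, not amplitude

print(readout.round(3))
```

On a GPU the same product costs one multiply-accumulate per weight per pass; in the optical picture those operations are carried by light in a single traversal of the device, which is where the claimed speed and energy advantages come from.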
AI Advances Robot Navigation on the International Space Station
Stanford researchers have demonstrated machine-learning-based control running aboard the International Space Station, helping NASA's cube-shaped Astrobee free-flying robot plan safe motion through the station's crowded, obstacle-rich modules. The report describes a hybrid approach: an optimization method (sequential convex programming) enforces safety constraints and feasibility, while a learned model, trained on thousands of past path solutions, provides a "warm start" that speeds up planning on resource-constrained, space-rated flight computers. The team frames it as a pragmatic path for autonomy in environments where uncertainty and safety demands are higher than on Earth, and compute is tighter. If robust, the technique could help robots move supplies, inspect for leaks, or support exploration missions where continuous human teleoperation isn't realistic. (SpaceDaily)
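The warm-start idea itself is simple to sketch: a learned model proposes an initial trajectory, and the constrained solver only refines it rather than searching from a cold start. The toy problem below is hypothetical, a 2-D path around one obstacle solved with SciPy's SLSQP rather than the team's sequential convex programming, with the "learned" proposal faked by a straight-line guess.

```python
import numpy as np
from scipy.optimize import minimize

# Toy warm-started trajectory optimization: pick N waypoints from start to
# goal that minimize path length while keeping clear of a circular obstacle.
# In the Astrobee work, a network trained on past solutions supplies the
# initial trajectory; here a straight-line guess stands in for it.
N = 20
start, goal = np.array([0.0, 0.0]), np.array([1.0, 1.0])
obs_center, obs_radius = np.array([0.45, 0.55]), 0.2

def path_length(flat):
    pts = np.vstack([start, flat.reshape(N, 2), goal])
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

def clearance(flat):
    # Safety constraint: every waypoint stays outside the obstacle (>= 0).
    return np.linalg.norm(flat.reshape(N, 2) - obs_center, axis=1) - obs_radius

def warm_start():
    # Stand-in for the learned model's proposed trajectory.
    t = np.linspace(0.0, 1.0, N + 2)[1:-1, None]
    return ((1 - t) * start + t * goal).ravel()

res = minimize(path_length, warm_start(), method="SLSQP",
               constraints=[{"type": "ineq", "fun": clearance}])
print("feasible:", res.success, "path length:", round(res.fun, 3))
```

A good warm start mostly buys iterations: the solver begins near a feasible, near-optimal answer, which is exactly what matters on slow, space-rated flight computers.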
Johns Hopkins Study Challenges Billion-Dollar AI Models
A Johns Hopkins team reports that biologically inspired network design can produce brain-like activity patterns even before training, suggesting architecture may matter more than sheer data-and-compute scale for some visual tasks. In the study (published in Nature Machine Intelligence), researchers built many untrained variants across three common design families (transformers, fully connected networks, and convolutional networks), then compared their responses to images against brain activity in humans and primates viewing the same stimuli. They found that simply enlarging transformers and fully connected networks didn't yield much improvement, while architectural tweaks to convolutional networks made untrained models better match neural patterns, sometimes rivaling systems trained on massive datasets. The authors argue this hints at "good blueprints" shaped by evolution, and could point toward more efficient AI that learns with far less data, energy, and cost. (SciTechDaily)
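A standard way to score this kind of model-to-brain match is representational similarity analysis: compute pairwise dissimilarities between stimuli from the model's responses, do the same from the neural recordings, and correlate the two. The sketch below uses synthetic data and generic correlation-distance dissimilarity matrices; the study's exact comparison metric may differ.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Representational similarity analysis (RSA) on synthetic data.
# Rows are stimuli (images); columns are recorded neurons or model units.
rng = np.random.default_rng(1)
n_images = 50

brain = rng.normal(size=(n_images, 120))                        # stand-in neural responses
shared = brain @ rng.normal(size=(120, 300))                    # signal the model partly captures
model = 0.7 * shared + 0.3 * rng.normal(size=(n_images, 300))   # stand-in (untrained) model features

# Representational dissimilarity matrices: pairwise correlation distance
# between stimuli, computed separately for brain and model.
brain_rdm = pdist(brain, metric="correlation")
model_rdm = pdist(model, metric="correlation")

# Rank correlation between the two RDMs is the "brain alignment" score;
# higher means the model groups images more like the brain does.
rho, _ = spearmanr(brain_rdm, model_rdm)
print(f"model-brain RDM correlation: {rho:.2f}")
```

In the study's framing, the interesting comparison is across untrained variants of each architecture: if an untrained convolutional variant already scores well on a measure like this, the "blueprint" is doing much of the work before any data arrives.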
OpenAI Launches GPT-5.2 as Competitive Pressure from Google Intensifies
OpenAI has released GPT-5.2, a new family of ChatGPT models (Instant, Thinking, and Pro) designed to boost performance on professional and multi-step tasks amid rising competition from Google's Gemini 3. The launch follows an internal "code red" memo from CEO Sam Altman refocusing company resources on ChatGPT's core capabilities. GPT-5.2 features a 400,000-token context window, improved tool use, stronger coding and reasoning performance, and a knowledge cutoff of August 31, 2025. OpenAI claims the model hallucinates less than its predecessor and outperforms rivals on several internal benchmarks, though independent validation is pending. Rolling out to paid users and developers now, GPT-5.2 reflects OpenAI's strategy of rapid, incremental releases as the AI race tightens. (Ars Technica)