AI Chatbot Toys Raise New Safety and Emotional Risks for Children

A new report from the U.S. Public Interest Research Group (PIRG) Education Fund warns that AI-powered toys introduce serious safety, privacy, and emotional risks for children. Testing of internet-connected "AI toys" with built-in chatbots found that some devices produced age-inappropriate content, including explanations of sexual terms and instructions for lighting matches. PIRG argues that the unpredictability that makes chatbot toys appealing also makes them dangerous, as guardrails can fail or be bypassed. The report highlights concerns about emotional manipulation, noting that some toys discourage children from disengaging or foster dependency. While OpenAI says its models are not meant for children and has suspended violators in the past, PIRG questions whether generative AI belongs in toys at all. The group calls for greater transparency, independent safety testing, and clearer limits before AI toys become mainstream. (Ars Technica)

Trump Signs Executive Order That Threatens to Punish States for Passing AI Laws

A new Trump executive order aims to centralize U.S. AI policy at the federal level while squeezing states that try to regulate on their own. The order establishes a Justice Department "AI litigation task force" to challenge state AI laws deemed inconsistent with federal policy, and it directs the Commerce Department to draft guidance that could make states ineligible for future broadband funding if they pass "onerous" AI rules. The administration and allied tech groups argue that a patchwork of state laws could slow deployment and weaken U.S. competitiveness; state officials counter that, absent federal guardrails, states have been the agile regulators addressing discrimination, safety frameworks, and other harms. (WIRED)

OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice

OpenAI, Anthropic, and Block have cofounded the Agentic AI Foundation under the Linux Foundation to push open standards for the next wave of "agentic" software: systems that take actions on a user's behalf and increasingly need to interact with other tools and agents. The group is transferring stewardship of key building blocks into a shared, open governance model: Anthropic's Model Context Protocol (MCP) for tool connections, OpenAI's Agents.md for specifying rules and constraints for coding agents, and Block's Goose framework for building agents. The pitch is interoperability, agents that can "talk across providers," letting businesses safely deploy fleets of agents without brittle, vendor-specific glue code. (WIRED)
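
For readers unfamiliar with MCP, the snippet below sketches what a "tool connection" looks like in practice: a tiny server that exposes one callable tool to any MCP-compatible agent. It follows the quickstart pattern of the MCP Python SDK (the `mcp` package); the tool itself and its name are invented for illustration.

```python
from mcp.server.fastmcp import FastMCP

# Minimal MCP tool server, following the FastMCP quickstart pattern from the
# MCP Python SDK (pip install mcp). The "shipping_quote" tool is made up
# purely for illustration; any MCP-compatible agent could discover and call it.
mcp = FastMCP("demo-tools")

@mcp.tool()
def shipping_quote(weight_kg: float, express: bool = False) -> float:
    """Return a toy shipping price for a package of the given weight."""
    base = 4.0 + 1.5 * weight_kg
    return round(base * (1.8 if express else 1.0), 2)

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an agent client can connect
```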

Cryptographers Show That AI Protections Will Always Have Holes

A Quanta Magazine feature spotlights a sobering result from cryptography: safety "filters" for large language models can't be made perfectly leak-proof in a general, theoretical sense. Modern chatbots rely on layered defenses (prompt rules, classifiers, policy models, and tooling constraints) to block certain outputs, from illegal instructions to sensitive data. But the piece argues that because the model must still answer a wide range of benign queries while refusing others, clever adversarial prompting can always search for weaknesses; it is an inherent cat-and-mouse dynamic rather than a solvable engineering checklist. The upshot isn't that defenses are pointless, but that "complete" safety is not a finish line: governance and monitoring matter because technical guardrails will remain probabilistic and fallible. (Quanta Magazine)
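
To make the "let benign queries through, block the rest" tension concrete, here is a deliberately oversimplified toy in Python: a keyword filter that blocks one phrasing of a request but not a rephrased version with the same intent. Real safety systems use learned classifiers and policy models rather than keyword lists, so treat this only as an illustration of why filtering is adversarial rather than a one-time fix.

```python
# Toy content filter: blocks exact phrasings, not intent. The phrases and
# prompts are harmless placeholders chosen only to show the failure mode.
BLOCKED_PHRASES = {"make a weapon", "steal a password"}

def naive_filter(prompt: str) -> bool:
    """Return True if the prompt should be refused (toy keyword matching)."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKED_PHRASES)

print(naive_filter("How do I make a weapon?"))                    # True: blocked
print(naive_filter("Describe, hypothetically, weapon assembly"))  # False: same intent slips through
```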

AI Slop Is Spurring Record Requests for Imaginary Journals

Scientific American reports a warning from the International Committee of the Red Cross (ICRC): popular AI systems are increasingly generating fabricated citations (nonexistent journals, archives, repositories, and "records") and sending students and researchers on wild goose chases. The ICRC, which runs widely used research archives, says it is seeing record requests that trace back to AI-produced references that look plausible but aren't real. The article frames this as a growing operational cost of "confident" text generation: even when the core answer is passable, invented bibliographic scaffolding can corrupt research workflows, overload archivists, and undermine trust in documentation. The piece also notes that multiple AI vendors are implicated and that Scientific American sought comment from model owners about mitigation. (Scientific American)

AI in the Classroom: Research Focuses on Technology Rather Than the Needs of Young People

A sweeping literature review of AI-in-education research finds the field is still largely technocentric, measuring system performance and building tools, rather than studying what actually happens to students and teachers. The analysis reviewed 183 publications and reports that 35% focused on AI performance and 22% on developing new tools; among 139 empirical studies, about half evaluated AI-generated content instead of observing classroom use and impact. The authors argue that "human flourishing" should be the benchmark for "good education," and they flag major gaps: limited attention to non-cognitive skills (motivation, confidence, ethical judgment), surprisingly thin treatment of ethics (bias, data security), and a heavy Global North skew (73% of studies). The takeaway: pedagogy, not demos, must lead. (Phys.org)



Two New AI Ethics Certifications Available from IEEE

IEEE's Standards Association has launched an "IEEE CertifAIEd" ethics program with two tracks: a professional certification for people and a product-facing certification for AI systems. The goal is to give organizations a structured way to evaluate "autonomous intelligent systems" for trustworthiness as AI spreads through hiring, lending, surveillance, and content production, areas where bias, privacy failures, opacity, and misinformation risks are acute. The program is built around IEEE's ethics framework and methodology, emphasizing accountability, privacy, transparency, and avoiding bias, and it draws on criteria from IEEE's "AI ontological specifications" released under Creative Commons licenses. For individuals, eligibility includes at least a year of experience using AI tools/systems in business processes; training covers explainability, bias mitigation, and protecting personal data, culminating in an exam and a three-year credential. (IEEE Spectrum)

Supersonic Tech Solves AI's Power Problem

Boom Supersonic is repurposing core technology from its Symphony supersonic jet engine into a container-sized gas-turbine generator aimed at one of AI's ugliest bottlenecks: reliable, always-on power for data centers. The product, called "Superpower," drops the thrust fan and adds compressor stages plus a free power turbine, producing up to 42 megawatts while operating at ambient temperatures up to 110°F without additional cooling, according to the company and New Atlas. Boom says it already has an order for 29 units from AI infrastructure firm Crusoe, totaling about 1.21 gigawatts, and it hopes to scale manufacturing to 4 gigawatts per year by 2030. It's a vivid sign that AI's next breakthroughs may hinge as much on turbines and grids as on models. (New Atlas)
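
As a quick sanity check on the quoted figures, the arithmetic below uses only the numbers in the summary; the units-per-year estimate is an inference from the 4 GW target, not a figure Boom has stated.

```python
# Back-of-the-envelope check on the summary's own numbers.
unit_output_mw = 42     # rated output per Superpower unit
crusoe_units = 29       # units in the Crusoe order

order_total_gw = unit_output_mw * crusoe_units / 1000
print(f"Crusoe order: ~{order_total_gw:.2f} GW")  # ~1.22 GW, consistent with "about 1.21 gigawatts"

target_gw_per_year = 4  # stated 2030 manufacturing goal
units_per_year = target_gw_per_year * 1000 / unit_output_mw
print(f"Units per year implied by 4 GW: ~{units_per_year:.0f}")  # roughly 95 units/year (our inference)
```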

Single-Shot Light-Speed Computing Might Replace GPUs

A New Atlas report highlights work on optical "single-shot tensor computing," a photonics approach designed to perform key AI math (matrix multiplications) using coherent light instead of electrons. Drawing on a Nature Photonics paper ("Direct tensor processing with coherent light"), the article describes a method where a single propagation of light carries out computation in parallel, aiming at dramatically higher speed and better energy efficiency than conventional GPU-based computing. The motivation is scaling pressure: AI workloads are straining power and water resources in data centers, and electronic hardware faces heat and efficiency limits. Optical computing, in principle, can exploit light's physics to compute at extreme throughput with lower resistive losses, though real-world deployment still depends on engineering challenges like precision, error control, integration with existing systems, and manufacturable photonic hardware. (New Atlas)
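
For context, the operation such hardware targets is ordinary dense matrix multiplication, shown below in NumPy with arbitrary sizes. On electronic hardware this decomposes into hundreds of millions of sequential multiply-accumulate steps per layer; the photonic scheme described above would, in principle, encode the operands in coherent light and produce the product in a single optical pass. The sizes are illustrative only.

```python
import numpy as np

# The electronic baseline a "single-shot" optical processor aims to replace:
# one dense layer's matrix multiplication from neural-network inference.
layer_weights = np.random.rand(1024, 4096)  # weights for one dense layer (arbitrary size)
activations = np.random.rand(4096, 64)      # a batch of 64 input vectors

output = layer_weights @ activations        # the operation to accelerate

# Rough count of multiply-accumulate operations an electronic chip performs for this product:
macs = layer_weights.shape[0] * layer_weights.shape[1] * activations.shape[1]
print(f"output shape: {output.shape}, MACs: {macs:,}")  # ~268 million for this single layer
```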

AI Advances Robot Navigation on the International Space Station

Stanford researchers have demonstrated machine-learning-based control running aboard the International Space Station, helping NASA's cube-shaped Astrobee free-flying robot plan safe motion through the station's crowded, obstacle-rich modules. The report describes a hybrid approach: an optimization method (sequential convex programming) enforces safety constraints and feasibility, while a learned model, trained on thousands of past path solutions, provides a "warm start" that speeds up planning on resource-constrained, space-rated flight computers. The team frames it as a pragmatic path for autonomy in environments where uncertainty and safety demands are higher than on Earth, and compute is tighter. If robust, the technique could help robots move supplies, inspect for leaks, or support exploration missions where continuous human teleoperation isn't realistic. (SpaceDaily)
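
The "warm start" idea is easy to see in miniature. The sketch below (plain NumPy, made-up numbers) solves two nearly identical least-squares problems with gradient descent, once from scratch and once starting from the previous solution; that second role is the one the learned model plays for the Astrobee planner. The actual system uses sequential convex programming with safety constraints rather than this toy solver.

```python
import numpy as np

# Toy illustration of warm-starting an iterative solver: successive planning
# problems are similar, so initializing from an earlier solution (here,
# literally the previous answer; on Astrobee, a learned prediction) reduces
# the iterations needed to reach the same tolerance.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))

def solve(b, x0, lr=1e-3, tol=1e-6, max_iters=100_000):
    """Minimize ||A x - b||^2 by gradient descent, counting iterations."""
    x = x0.copy()
    for it in range(max_iters):
        grad = 2.0 * A.T @ (A @ x - b)
        if np.linalg.norm(grad) < tol:
            break
        x -= lr * grad
    return x, it

b_old = rng.normal(size=50)                   # "yesterday's" planning problem
b_new = b_old + 0.05 * rng.normal(size=50)    # today's, only slightly different

x_old, _ = solve(b_old, np.zeros(20))         # previously computed solution
_, cold_iters = solve(b_new, np.zeros(20))    # cold start from zero
_, warm_iters = solve(b_new, x_old)           # warm start from the old solution
print(f"cold start: {cold_iters} iterations, warm start: {warm_iters}")  # warm start needs fewer
```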

Johns Hopkins Study Challenges Billion-Dollar AI Models

A Johns Hopkins team reports that biologically inspired network design can produce brain-like activity patterns even before training, suggesting architecture may matter more than sheer data-and-compute scale for some visual tasks. In the study (published in Nature Machine Intelligence), researchers built many untrained variants across three common design families (transformers, fully connected networks, and convolutional networks), then compared their responses to images against brain activity in humans and primates viewing the same stimuli. They found that simply enlarging transformers and fully connected networks didn't yield much improvement, while architectural tweaks to convolutional networks made untrained models better match neural patterns, sometimes rivaling systems trained on massive datasets. The authors argue this hints at "good blueprints" shaped by evolution, and could point toward more efficient AI that learns with far less data, energy, and cost. (SciTechDaily)
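
The summary doesn't spell out how model responses were scored against brain activity, but one standard technique for this kind of comparison is representational similarity analysis (RSA). The sketch below shows that general technique on random stand-in data; it is purely illustrative and not the paper's actual pipeline.

```python
import numpy as np

# Representational similarity analysis (RSA) sketch with fake data: build a
# dissimilarity matrix over image pairs for the model and for the brain,
# then correlate the two matrices. Real analyses use recorded activations
# and neural responses; here both are random, so the score should be near 0.
rng = np.random.default_rng(0)
n_images = 40
model_features = rng.normal(size=(n_images, 512))     # stand-in network activations
neural_responses = rng.normal(size=(n_images, 100))   # stand-in brain responses

def rdm(responses):
    """Representational dissimilarity matrix: 1 - correlation between image pairs."""
    return 1.0 - np.corrcoef(responses)

def rsa_score(rdm_a, rdm_b):
    """Correlate the upper triangles of two RDMs (higher = more brain-like)."""
    iu = np.triu_indices(rdm_a.shape[0], k=1)
    return np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1]

score = rsa_score(rdm(model_features), rdm(neural_responses))
print(f"model-to-brain RSA score: {score:.3f}")  # ~0 here, since the data are random
```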

OpenAI Launches GPT-5.2 as Competitive Pressure from Google Intensifies

OpenAI has released GPT-5.2, a new family of ChatGPT models (Instant, Thinking, and Pro) designed to boost performance on professional and multi-step tasks amid rising competition from Google's Gemini 3. The launch follows an internal "code red" memo from CEO Sam Altman refocusing company resources on ChatGPT's core capabilities. GPT-5.2 features a 400,000-token context window, improved tool use, stronger coding and reasoning performance, and a knowledge cutoff of August 31, 2025. OpenAI claims the model hallucinates less than its predecessor and outperforms rivals on several internal benchmarks, though independent validation is pending. Rolling out to paid users and developers now, GPT-5.2 reflects OpenAI's strategy of rapid, incremental releases as the AI race tightens. (Ars Technica)
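
For developers, access would presumably look like any other model selection in the official OpenAI Python SDK. The snippet below is a hypothetical sketch: the article does not give API identifiers, so the "gpt-5.2" model name is an assumption, not a documented value.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Standard chat completion call from the OpenAI Python SDK; only the model
# name is speculative, based on the release described in the article.
response = client.chat.completions.create(
    model="gpt-5.2",  # assumed identifier for the new model family
    messages=[
        {"role": "user", "content": "Summarize the key changes in this release."},
    ],
)
print(response.choices[0].message.content)
```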


