MAHA Movement Fractures After RFK Jr. Backs Glyphosate Production Order: Members of the Make America Healthy Again (MAHA) movement are openly rebelling after founder Robert F. Kennedy Jr. supported President Trump’s executive order to expand U.S. production of glyphosate, an herbicide long opposed by Kennedy and his followers. Influencers and activists expressed shock, accusing the administration of prioritizing corporate and national-security interests over public health. Critics within the movement warned the decision could erode political trust and damage electoral prospects, especially among health-focused supporters drawn to MAHA’s anti-pesticide stance. Kennedy defended the move as necessary for food security and defense readiness, citing reliance on foreign glyphosate imports. The backlash highlights a widening rift between political pragmatism and activist expectations, with prominent MAHA voices warning the decision could have lasting political and public-health consequences. (Ars Technica)

Can consciousness ever be understood — this side of death?: Christof Koch reviews A World Appears, Michael Pollan’s wide-angle tour through the modern consciousness conversation, framed by Pollan’s own psychedelic experiences and a reporter’s instinct for intellectual battle lines. The review tracks Pollan’s central tension: consciousness research is increasingly empirical and computational, yet subjective experience still resists clean reduction to brain “parts lists” or silicon analogies. Pollan canvasses philosophers, neuroscientists, psychologists, and biologists (including provocative work on plant signaling) while trying to keep curiosity from turning into naïve anthropomorphism. Koch emphasizes the book’s value as synthesis: it treats consciousness as a scientific target and a lived mystery, and it asks what might be lost when we outsource meaning to mechanistic metaphors. (Nature)

New platform lets AI agents hire human helpers: Nature’s daily briefing spotlights a new marketplace that lets autonomous AI agents pay humans to complete tasks the agents can’t reliably do themselves—an explicit “hybrid cognition” workflow rather than a hidden labor layer. The item frames the idea as a practical patch for agent limitations: when an agent hits a wall (verification, real-world judgment, specialized knowledge), it can route a micro-task to a person and continue operating. That raises immediate questions about accountability, transparency, worker protections, and whether “agentic” systems will become more effective by leaning on distributed human cognition. The briefing also notes additional science-policy threads in the roundup, but the headline story is a preview of an economy where human attention becomes an on-demand peripheral for software minds. (Nature)

There is no machine consciousness: The inference principle: This paper argues that calling today’s AI “conscious” fails on a principled gap between behavior and experience. The author centers an “inference principle”: from the outside, we infer consciousness in other humans (and perhaps some animals) because they share our biological organization, developmental history, and the kinds of internal causes we know accompany experience. Large language models can mimic the outputs of conscious agents without having the same kind of causal interior—especially the embodied, affect-laden, homeostatic machinery that many theories treat as essential. The essay doesn’t claim AI can’t be powerful or useful; it claims that, given what consciousness explanations require, the inference from fluent text to felt experience is unjustified. (Neuroscience of Consciousness)

Level of consciousness from a third-person perspective: This article tackles a deceptively hard problem: “level of consciousness” is often treated as a single dial (awake → drowsy → anesthetized), but clinicians and researchers must assess it from the outside. The author analyzes what third-person measures can legitimately claim—behavioral responsiveness, arousal, reportability, and neurophysiological signatures—and where they can mislead, especially in disorders of consciousness or pharmacological states where motor output is impaired. A key theme is operational clarity: if “level” collapses different capacities into one score, it can hide dissociations (high arousal with low awareness, or preserved awareness with limited action). The paper presses for definitions that match measurement, so “level” becomes a testable construct rather than a vague clinical impression. (Neuroscience of Consciousness)

DMT can put depression into remission — but is it safe enough?: The Guardian reports on clinical trial evidence that a single dose of DMT, paired with structured psychological support, can rapidly reduce depressive symptoms for some participants—raising hopes for a fast-acting psychedelic intervention. The piece situates DMT within the broader psychedelic “renaissance,” emphasizing both the promise (speed, intensity, potential durability) and the practical risks: the experience is brief but extremely immersive, and safety hinges on screening, supervision, and integration rather than the molecule alone. It also stresses the limits of early trials: sample sizes, expectancy effects, and questions about who benefits most. The article treats DMT as a neuroscience story as much as a mental-health one—an engineered perturbation of consciousness being tested as medicine. (The Guardian)

Finding consciousness outside the brain — and a tiny robot walks on legs: This episode pairs two frontiers stories: first, a discussion of whether consciousness (or its necessary ingredients) might extend beyond the brain in ways current neuroscience misses—covering competing frameworks and the evidential standards that would be required to move such claims from speculation to science. Second, it turns to robotics: a small, legged machine that demonstrates new approaches to locomotion and control, illustrating how biological principles can inspire engineered systems without copying biology one-to-one. The consciousness segment is careful about boundary conditions—what counts as evidence, what counts as theory, and why “explanations” can be seductive even when they don’t yet predict. The robotics segment offers a concrete counterpoint: mechanisms you can build, test, and iterate. (Science)

Two-month-old babies can see the world in a more complex way than scientists thought: New work suggests that by around two months of age, infants’ visual systems may already support richer processing than classic developmental timelines imply. The report describes experiments designed to probe what very young babies can discriminate and how their attention tracks structure in what they see, challenging the notion that early vision is mostly blurry input awaiting months of cortical refinement. The story emphasizes method: because infants can’t verbally report, researchers rely on carefully controlled displays and measures like looking time to infer perception. The results feed into a larger question relevant to consciousness research: how early in development do the building blocks of awareness—selective attention, prediction, stable percepts—begin to operate? The piece frames the findings as a recalibration, not a final answer. (phys.org)

Photons, logic, and a step toward scalable quantum computing: Researchers report progress on a photonic quantum logic gate—one of the basic operations needed to compute with light-based qubits. The article explains why photons are attractive carriers of quantum information (low noise, room-temperature operation) and why gates are hard: you must make fragile quantum states interact reliably without destroying them. The reported advance focuses on improving how quantum logic can be executed and verified in optical setups, a key bottleneck for scaling beyond lab demonstrations. While the story is framed as quantum technology, it also connects to cognition-adjacent themes: computation at the physical limits of information, and how “hardware realities” shape what kinds of processing become possible. The piece stresses that this is an enabling component—important, but one step in a larger engineering stack (sources, detectors, error handling, integration). (phys.org)
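For readers unfamiliar with what a two-qubit “quantum logic gate” actually does mathematically, here is a minimal sketch in linear algebra terms. This is a generic illustration of a controlled-Z gate — a common entangling gate in photonic proposals — not the specific gate or optical implementation in the reported work; the matrices and states are standard textbook objects, nothing here is drawn from the article itself.

```python
import numpy as np

# Illustrative two-qubit controlled-Z (CZ) gate, written as a 4x4 matrix
# acting on the basis |00>, |01>, |10>, |11>. This is a generic sketch,
# not the experiment's actual photonic circuit.
CZ = np.diag([1, 1, 1, -1]).astype(complex)

# Any valid quantum gate must be unitary: U† U = I (norms are preserved,
# so the fragile quantum state is transformed without being destroyed).
assert np.allclose(CZ.conj().T @ CZ, np.eye(4))

# Apply CZ to the product state |+>|+> = (1/2)(|00> + |01> + |10> + |11>).
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
state = np.kron(plus, plus)
out = CZ @ state

# Only the |11> amplitude flips sign, producing an entangled state —
# exactly the kind of conditional interaction that is hard to realize
# between photons, which do not naturally interact with one another.
print(np.round(out, 3))
```

The difficulty the article describes is precisely the sign flip in the last amplitude: it requires one photon’s state to condition another’s, which linear optics alone does not provide without extra resources such as measurement and ancilla photons.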

Opposing network patterns of integration–segregation in conscious states: This new preprint tackles a central claim in consciousness science: conscious states may require a dynamic balance between brain-wide integration (information sharing across networks) and segregation (specialized processing in distinct modules). The authors analyze large-scale brain dynamics across different states and report patterns suggesting that shifts toward (or away from) integration and segregation can be opposed in meaningful, state-dependent ways—potentially helping discriminate conscious processing from anesthesia- or sleep-like regimes. Because it’s a preprint, the results are provisional, but the paper is valuable for its mechanistic framing: instead of asking where consciousness “lives,” it asks how networks coordinate, reconfigure, and stabilize. If the findings hold, they could sharpen biomarkers and guide theory-testing by linking consciousness to measurable properties of distributed dynamics rather than single-region activity. (bioRxiv)
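The integration–segregation contrast can be made concrete with a toy correlation matrix. The sketch below is purely illustrative of the two constructs — within-module versus between-module coupling — and does not reproduce the preprint’s actual metrics, data, or analysis pipeline; all numbers and the helper function are invented for the example.

```python
import numpy as np

# Toy proxies for segregation (strong within-module coupling) and
# integration (strong between-module coupling) on an 8-node network
# split into two modules of 4 nodes each. Illustrative only.
def integration_segregation(corr, modules):
    same = np.equal.outer(modules, modules)      # True where nodes share a module
    off_diag = ~np.eye(len(modules), dtype=bool)
    within = corr[same & off_diag].mean()        # segregation proxy
    between = corr[~same].mean()                 # integration proxy
    return between, within

modules = np.array([0, 0, 0, 0, 1, 1, 1, 1])

# "Segregated" regime: tight modules, weak cross-talk.
seg = np.where(np.equal.outer(modules, modules), 0.8, 0.1)
np.fill_diagonal(seg, 1.0)

# "Integrated" regime: coupling is uniform across the whole network.
integ = np.full((8, 8), 0.5)
np.fill_diagonal(integ, 1.0)

b_seg, w_seg = integration_segregation(seg, modules)
b_int, w_int = integration_segregation(integ, modules)
print(b_seg, w_seg)   # between-module coupling much lower than within
print(b_int, w_int)   # between-module coupling rises to match within
```

The preprint’s claim, in these terms, is that conscious states may depend on how the brain moves along (and balances) this axis over time, which is why state-dependent shifts between the two regimes could serve as biomarkers.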

Psychedelics may turn memory into hallucination by turning down reality: A new report synthesizes research suggesting a specific computational shift under psychedelics: the brain weights internal models and memory-driven predictions more heavily while down-weighting incoming sensory evidence—especially in visual processing. In that framing, hallucinations aren’t random “noise”; they’re perception dominated by prior beliefs and reactivated memory content when bottom-up constraints are reduced. The article links the effect to serotonergic mechanisms commonly implicated in psychedelic states and highlights why this matters for consciousness science: it offers a concrete lever for studying how the brain constructs the feeling of reality in real time. It also underscores the clinical angle—if psychedelics temporarily loosen rigid predictive loops, that could help explain reported therapeutic effects, while also clarifying why set, setting, and supervision are safety-critical. (SciTech Daily)
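The “re-weighting” idea above has a simple quantitative skeleton in predictive-processing accounts: a percept behaves like a precision-weighted average of a prior (expectation or memory) and sensory evidence. The sketch below is a hedged illustration of that general scheme, not the article’s model; the function, variable names, and numbers are all invented for the example.

```python
# Hedged sketch of precision-weighted cue combination: the percept is a
# blend of a prior (memory/expectation) and sensory evidence, weighted by
# their relative precisions (inverse variances). Illustrative values only.
def percept(prior_mean, prior_precision, sense_mean, sense_precision):
    w = prior_precision / (prior_precision + sense_precision)
    return w * prior_mean + (1 - w) * sense_mean

prior, sense = 10.0, 2.0  # what memory expects vs. what the eyes report

# Baseline: sensory evidence is precise, so it dominates the percept.
baseline = percept(prior, 1.0, sense, 4.0)       # 0.2*10 + 0.8*2 = 3.6

# "Psychedelic" regime: sensory precision is turned down, so the percept
# is pulled strongly toward the prior — memory content shapes perception.
psychedelic = percept(prior, 1.0, sense, 0.25)   # 0.8*10 + 0.2*2 = 8.4

print(baseline, psychedelic)
```

On this toy account, hallucination is not added noise but a shift of the weight `w` toward the prior — which is why the article frames psychedelics as a lever for studying how strongly prior beliefs normally constrain the feeling of reality.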

Proposed Restrictions at NIST Spark Fears of Scientific Brain Drain: New measures under consideration at the National Institute of Standards and Technology (NIST) could limit the role of foreign-born researchers, prompting warnings from lawmakers and scientists that the United States risks losing critical expertise. NIST, central to fields from cybersecurity to semiconductor standards, relies heavily on international talent. Reports indicate proposed rules could shorten research appointments, tighten security protocols, and restrict lab access for noncitizens, with some recruitment already canceled amid uncertainty. Supporters say the changes aim to protect U.S. research from intellectual-property theft, but critics argue they exceed reasonable safeguards and threaten the agency’s global credibility. Former officials and legislators warn that even rumors of restrictions may deter top researchers, potentially weakening U.S. scientific leadership, innovation capacity, and long-term competitiveness in strategic technologies. (Ars Technica)
