Boston Public Library Partners with Harvard and OpenAI to Digitize Historic Documents
Boston Public Library is collaborating with OpenAI and Harvard Law School to digitize its vast collection of historically significant government documents dating back to the early 1800s. The collection includes oral histories, congressional reports, and industry surveys that currently require in-person visits to access. The project aims to digitize 5,000 documents by year-end, enhancing metadata and enabling global searchability. OpenAI helps fund scanning and project management costs while gaining access to high-quality, copyright-free materials for training language models. Harvard’s Institutional Data Initiative facilitates the partnership, ensuring both improved library patron experiences and sustainable AI data ecosystems. Library professionals express cautious optimism about the collaboration while noting cultural differences between public institutions and Silicon Valley’s rapid-pace approach. The digitized materials will be publicly accessible, not exclusively available to OpenAI. (NPR)
Corporate Leaders Delay AI Job Replacements Due to Political Fears
AI technology is already capable of replacing millions of jobs, but mass layoffs haven’t begun because CEOs fear being the first to announce significant AI-driven job cuts. Corporate leaders like Palantir’s Alex Karp openly discuss plans to increase revenue tenfold while cutting the workforce by 12.2%, and Amazon’s Andy Jassy warns of fewer traditional jobs ahead. Instead of dramatic firings, companies implement hiring freezes, forcing managers to justify why humans are needed over AI. Entry-level corporate job listings have declined 15% over the past year, particularly affecting young workers. Companies have announced over 806,000 private-sector job cuts since January, with AI cited as a top contributing factor. The article suggests this “quiet revolution” represents corporate leaders waiting for political cover that isn’t coming, while politicians remain unprepared for the displacement crisis. (Gizmodo)
AI Industry Faces Potentially Devastating Copyright Class Action Lawsuit
AI industry groups are urging an appeals court to block what they describe as the largest copyright class action ever certified, involving up to 7 million claimants against Anthropic over AI training practices. The lawsuit, brought by three authors, could result in hundreds of billions in damages with each claimant potentially receiving $150,000. Anthropic argues the district court judge rushed certification without proper analysis, creating coercive settlement pressure that threatens the entire AI industry’s future. Industry associations warn this could chill AI investment and harm America’s technological competitiveness. Paradoxically, author advocacy groups also oppose the class action, arguing individual copyright ownership questions are too complex for class treatment and that many authors may never learn about the lawsuit, potentially forcing inadequate settlements while leaving fundamental AI training legality questions unresolved. (Ars Technica)
Parents Avoid Posting Children’s Photos Due to AI Deepfake Threats
A growing number of parents are joining the “never-post” movement, refusing to share children’s photos on social media due to AI-powered “nudifier” apps that can generate deepfake nudes from any image. These cheap, easily accessible tools are being widely used in schools, causing trauma for victims despite new federal laws criminalizing nonconsensual fake nudes. The apps generate roughly $36 million annually, with some charging just 8 cents per fake image. Beyond deepfakes, sharing children’s photos risks identity theft, as birthday party images can reveal birth dates useful for fraud. Child identity theft surged 40% from 2021-2024, affecting 1.1 million children yearly. While private accounts offer some protection, perpetrators often know victims personally. The author advocates for encrypted messaging and private photo albums as safer alternatives, acknowledging this may be futile once children control their own social media presence. (New York Times)
Former Google Employees Launch AI Tool for Viral Video Creation
OpenArt, founded by two former Google employees in 2022, has launched a “one-click story” feature that transforms single sentences, scripts, or songs into one-minute videos featuring the wild characters popular in “brain rot” content. The platform aggregates over 50 AI models, including DALL-E 3 and GPT, offering three templates: Character Vlog, Music Video, or Explainer. With 3 million monthly users, the service aims to lower barriers for AI content creation. However, the platform faces intellectual property concerns, as it offers copyrighted characters like Pikachu and SpongeBob, though CEO Coco Mao says they try to prevent IP infringement and are open to licensing discussions. Pricing ranges from $14 to $56 monthly based on a credit system. The company has raised $5 million in funding and projects over $20 million in annual revenue while maintaining positive cash flow. (TechCrunch)
Visa Restrictions Threaten International Research Careers and Scientific Collaboration
International researchers face mounting visa challenges amid increasing anti-immigration sentiment in major education destinations. The Trump administration’s travel bans affecting 19 countries and visa processing delays threaten to reduce US international student arrivals by 150,000 this fall, potentially costing $7 billion in economic impact. Iranian biogeochemist Fatemeh Ajallooeian’s Harvard fellowship was terminated due to new restrictions, exemplifying career disruptions. Canada and Australia have imposed student caps, leading to 32% and 40% declines respectively in international applications. A Harvard survey revealed international postdocs spend over a month and $2,000-5,000 on visa renewals, causing anxiety and work disruptions. Scientists warn these policies will halt global research collaboration, particularly impacting US-China partnerships crucial for physical and natural sciences. Some countries like France and Germany are streamlining visa processes to attract skilled researchers as traditional destinations become less accessible. (Nature)
Chinese Authorities Face Backlash Over Nighttime Blood Tests in Chikungunya Outbreak
China’s strict disease control measures are sparking public outrage as authorities combat a Chikungunya virus outbreak in Guangdong province. A viral social media video showed police and health officials entering a single mother’s home at night, taking blood samples from her children without her presence or consent while she worked a night shift. The family was identified after a pharmacy reported the son’s purchase of fever medication to authorities, part of mandatory drug sale reporting requirements reminiscent of zero-Covid surveillance. The Chikungunya outbreak, which began in Foshan city a month ago, has infected about 8,000 people and reached Hong Kong. While the mosquito-borne disease causes fever and joint pain, it is rarely fatal; still, officials have activated strict control measures including mosquito eradication and public mobilization. The incident’s hashtag garnered nearly 90 million views on Weibo, with users expressing alarm over authorities’ invasive tactics. (The Guardian)
Coffee Study Finds Minimal Toxin Levels with Some Packaging Concerns
A comprehensive investigation by the Clean Label Project tested 45 coffee brands for contaminants and found most caffeinated coffees are generally safe, with toxin levels well below European Union safety limits. However, testing revealed traces of glyphosate herbicide and its byproduct AMPA, which has been linked to hormone disruption and neurotoxic effects. Surprisingly, all 12 organic coffee samples contained AMPA, possibly due to contamination from neighboring conventional farms. The study found higher phthalate levels in canned coffee compared to pods and bags, suggesting packaging as a contamination source. All samples contained acrylamide, with medium roasts showing highest levels, while dark and light roasts had lower concentrations. Despite these findings, researchers emphasized coffee remains one of the cleanest product categories tested, recommending consumers choose darker or lighter roasts in bags or pods while considering growing regions for heavy metal content. (CNN)
Ancient Turkish Site Becomes Battleground Between Archaeology and Conspiracy Theories
Gobekli Tepe, a 12,000-year-old archaeological site in Turkey featuring T-shaped limestone pillars carved with animals and human figures, has become the center of conspiracy theories popularized by figures like Graham Hancock and Joe Rogan podcast guests. While German archaeologist Klaus Schmidt originally called it “the world’s oldest temple,” current interpretations suggest it was a ceremonial gathering site for early communities. Conspiracy theorists like Jimmy Corsetti accuse archaeologists of deliberately slowing excavations to hide discoveries, claims that have gained mainstream attention through Rogan’s platform. Lead archaeologist Lee Clare defends the methodical approach, explaining that careful layer-by-layer excavation preserves irreplaceable historical information for future generations; only a small percentage of the site has been excavated since work began in the 1990s. Clare, who has deleted his social media accounts due to harassment, warns that conspiracy narratives risk drowning out legitimate scientific research into humanity’s earliest storytelling evidence. (NPR)
IMAGE CREDIT: Harrison Haines