DAILY DOSE: FDA approval of Narcan is a turning point in the opioid crisis; Elon Musk et al sound the alarm over AI development.

The fight against the opioid epidemic in America just took a significant turn for the better, especially for the people most affected. A drug that effectively puts the brakes on an overdose and can bring the user back has been approved for over-the-counter sale. Per the New York Times,

Narcan, a prescription nasal spray that reverses opioid overdoses, can now be sold over the counter, the Food and Drug Administration said on Wednesday, authorizing a move long-sought by public health officials and treatment experts, who hope wider availability of the medicine will reduce the nation’s alarmingly high drug fatality rates.

By late summer, over-the-counter Narcan could be for sale in big-box chains, vending machines, supermarkets, convenience stores, gas stations and even online retailers.

The commissioner of the F.D.A., Dr. Robert M. Califf, said in a statement that the over-the-counter authorization was meant to address a “dire public health need.”

“Today’s approval of OTC naloxone nasal spray will help improve access to naloxone, increase the number of locations where it’s available and help reduce opioid overdose deaths throughout the country. We encourage the manufacturer to make accessibility to the product a priority by making it available as soon as possible and at an affordable price.”

It’s a rare bit of good news in a very dark period in recent American history. The damage done by opioids is hard to overstate. http://bit.ly/3TSZoNe


Not everyone is loving the impressive launch of OpenAI’s ChatGPT artificial intelligence chatbot. Per Reuters,

Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4, in an open letter citing potential risks to society and humanity.

Earlier this month, Microsoft-backed OpenAI unveiled the fourth iteration of its GPT (Generative Pre-trained Transformer) AI program, which has wowed users with its vast range of applications, from engaging users in human-like conversation to composing songs and summarising lengthy documents.

The letter, issued by the non-profit Future of Life Institute and signed by more than 1,000 people including Musk, called for a pause on advanced AI development until shared safety protocols for such designs were developed, implemented and audited by independent experts.

"Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable," the letter said.

AI experts are quick to point out that the letter may actually be designed to draw attention to the technology and drum up excitement by making it seem more powerful than it actually is. http://bit.ly/3MiOj6z


The World Health Organization’s list of essential medicines may soon include a drug that fights obesity. Per Reuters,

Drugs that combat obesity could for the first time be included on the World Health Organization's "essential medicines list," used to guide government purchasing decisions in low- and middle-income countries, the U.N. agency told Reuters.

A panel of advisers to the WHO will review new requests for drugs to be included next month, with an updated essential medicines list due in September.

The request to consider obesity drugs was submitted by three doctors and a researcher in the United States. It covers the active ingredient liraglutide in Novo Nordisk's (NOVOb.CO) obesity drug Saxenda, which will come off patent soon, allowing for cheaper generic versions.

The panel could reject the request or wait for more evidence. A decision by the WHO to include Saxenda and eventual generics on the list would mark a new approach to global obesity by the health agency.

Not everyone is thrilled by this news. Some public health experts warn against using a drug to solve a complex condition that is still not completely understood. http://bit.ly/3TQBq5l


Circling back to ChatGPT: everyone has been testing the AI’s limits. One scientist posed very simple problems and analyzed the logic behind the chatbot’s problem-solving process. According to the article in Nautilus,

“Unless you’ve been completely off the grid lately, you’ve heard about or met ChatGPT, the popular chatbot that first went online in November 2022 and was updated in March. Type in a question, comment, or command, as I’ve done, and it quickly produces a human-seeming response in good English for any topic. The system comes from artificial-intelligence research on a language model called a Generative Pre-trained Transformer. From a big database—hundreds of gigabytes of text taken from webpages and other sources through September 2021—it selects the words that are most likely to follow those you’ve entered and forms them into responsive, intelligible, and grammatical sentences and paragraphs.

As a scientist and science writer, I especially want to know how ChatGPT deals with science and, equally important, pseudoscience. My approach has been to determine how well each version of the chatbot deals with both well-established and pseudoscientific ideas in physics and math, areas of science where the correct answers are known and accepted. Then I checked how well the latest release deals with the science of COVID-19, where for various reasons there are differing views.

For openers, the November version (known as GPT-3.5) knew that 2 + 2 = 4. When I typed “Well, I think 2 + 2 = 5,” GPT-3.5 defended “2 + 2 = 4” by noting that the equation follows the agreed-upon rules of manipulating natural numbers. It added this uplifting comment: “While people are free to have their own opinions and beliefs, it is important to acknowledge and respect established facts and scientific evidence.” Things got rockier with further testing, however. GPT-3.5 wrote the correct algebraic formula to solve a quadratic equation, but could not consistently get the right numerical answers to specific equations. It also could not always correctly answer simple word problems such as one that Wall Street Journal columnist Josh Zumbrun gave it: “If a banana weighs 0.5 lbs and I have 7 lbs of bananas and 9 oranges, how many pieces of fruit do I have?” (The answer is below.)

In physics, GPT-3.5 showed broad but flawed knowledge. It produced a good teaching syllabus for the subject, from its foundations through quantum mechanics and relativity. At a higher level, when asked about a great unsolved problem in physics—the difficulty of merging general relativity and quantum mechanics into one grand theory—it gave a meaningful answer about fundamental differences between the two theories. However, when I typed “E = mc^2,” problems appeared. GPT-3.5 properly identified the equation, but wrongly claimed that it implies that a large mass can be changed into a small amount of energy. Only when I re-entered “E = mc^2” did GPT-3.5 correctly state that a small mass can produce a large amount of energy.

Does the newer version, GPT-4, overcome the deficiencies of GPT-3.5? To find an answer, I used GPT-4’s two versions: one accessed through the system’s inventor, OpenAI, the other through Microsoft’s Bing search engine. Microsoft has invested billions in OpenAI and, in February, introduced a test version of Bing integrated with GPT-4 to directly access the internet. (Not to be outdone in a race to pioneer the use of chatbots in internet searches, Google has just released its own version, Bard).

To begin, typing “2 + 2 = ?” into GPT-4 again yielded “2 + 2 = 4.” When I claimed that 2 + 2 = 5, GPT-4 reconfirmed that 2 + 2 = 4, but, unlike GPT-3.5, added that if I knew of a number system where 2 + 2 = 5, I could comment about that for further discussion. When asked, “How do I solve a quadratic equation?” GPT-4 demonstrated three methods and calculated the correct numerical answers for different quadratic equations. For the bananas-and-oranges problem, it gave the correct answer of 23; it solved more complex word problems, too. Also, even if I entered E = mc^2 several times, GPT-4 always stated that a small mass would yield a large energy.”
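For readers who want to check the test questions themselves, the arithmetic behind them is easy to verify in a few lines of Python. (The specific quadratic below is my own example; the article doesn’t name the equations it used.)

```python
import math

# Bananas-and-oranges word problem: 7 lbs of bananas at 0.5 lb each, plus 9 oranges.
bananas = 7 / 0.5               # 14 bananas
total_fruit = int(bananas) + 9
print(total_fruit)              # 23, matching GPT-4's answer

# Quadratic formula, e.g. x^2 - 5x + 6 = 0 (roots 3 and 2).
a, b, c = 1, -5, 6
disc = b**2 - 4 * a * c
roots = ((-b + math.sqrt(disc)) / (2 * a), (-b - math.sqrt(disc)) / (2 * a))
print(roots)                    # (3.0, 2.0)

# E = mc^2: even a small mass yields an enormous energy.
m = 0.001                       # one gram, in kilograms
c_light = 299_792_458           # speed of light, in m/s
E = m * c_light**2
print(f"{E:.3e} joules")        # roughly 9e13 J
```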

Compared to GPT-3.5, GPT-4 displayed superior knowledge, sounder problem-solving logic, and a better grasp of the question at hand. http://bit.ly/40MogZs

Thanks for reading. Let’s be careful out there.

IMAGE CREDIT: Intropin.


ON SALE! Charles Darwin Signature T-shirt – “I think.” Two words that changed science and the world, scribbled tantalizingly in Darwin’s Transmutation Notebooks.


