Recent quotes:
ChatGPT Defeated Doctors at Diagnosing Illness - The New York Times
The chatbot, from the company OpenAI, scored an average of 90 percent when diagnosing a medical condition from a case report and explaining its reasoning. Doctors randomly assigned to use the chatbot got an average score of 76 percent. Those randomly assigned not to use it had an average score of 74 percent.
The study showed more than just the chatbot’s superior performance.
It unveiled doctors’ sometimes unwavering belief in a diagnosis they made, even when a chatbot potentially suggests a better one.
HarperCollins Confirms It Has a Deal to Sell Authors' Work to AI Company
On Friday, author Daniel Kibblesmith, who wrote the children’s book Santa’s Husband and published it with HarperCollins, posted screenshots on Bluesky of an email he received, seemingly from his agent, informing him that the agency was approached by the publisher about the AI deal. “Let me know what you think, positive or negative, and we can handle the rest of this for you,” the screenshotted text in an email to Kibblesmith says. The screenshots show the agent telling Kibblesmith that HarperCollins was offering $2,500 (non-negotiable).
Leaked Training Shows How Doctors in New York’s Biggest Hospital System Are Using AI
But the presentation and materials viewed by 404 Media include leadership saying AI Hub can be used for "clinical or clinical adjacent" tasks, as well as answering questions about hospital policies and billing, writing job descriptions and editing writing, and summarizing electronic medical record excerpts and inputting patients’ personally identifying and protected health information. The demonstration also showed potential capabilities that included “detect pancreas cancer,” and “parse HL7,” a health data standard used to share electronic health records.
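For context on that last item: HL7 v2 is a plain-text standard in which each segment sits on its own carriage-return-separated line and fields are pipe-delimited. A minimal, illustrative parser follows; it is not the hospital's tool, and the sample message is fabricated.

```python
# Minimal, illustrative HL7 v2 parsing sketch (not the AI Hub described above).
# Segments are separated by carriage returns; fields are pipe-delimited.
segments = [
    "MSH|^~\\&|LAB|HOSP|EHR|HOSP|202401011200||ORU^R01|12345|P|2.3",
    "PID|1||000123||DOE^JANE||19800101|F",
    "OBX|1|NM|GLU^Glucose||105|mg/dL|70-99|H",
]
sample = "\r".join(segments)  # fabricated sample message

def parse_hl7(message):
    """Split an HL7 v2 message into {segment_name: [list of field lists]}."""
    parsed = {}
    for segment in message.split("\r"):
        fields = segment.split("|")
        parsed.setdefault(fields[0], []).append(fields[1:])
    return parsed

msg = parse_hl7(sample)
print(msg["PID"][0][4])                     # patient name field: DOE^JANE
print(msg["OBX"][0][4], msg["OBX"][0][5])   # observation value and units: 105 mg/dL
```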
Opinion | Beyond the ‘Matrix’ Theory of the Human Mind - The New York Times
Jonathan Frankle, the chief scientist at MosaicML and a computer scientist at Harvard, described this to me as the “boring apocalypse” scenario for A.I., in which “we use ChatGPT to generate long emails and documents, and then the person who received it uses ChatGPT to summarize it back down to a few bullet points, and there is tons of information changing hands, but all of it is just fluff. We’re just inflating and compressing content generated by A.I.”
Opinion | Beyond the ‘Matrix’ Theory of the Human Mind - The New York Times
One is that these systems will do more to distract and entertain than to focus. Right now, the large language models tend to hallucinate information: Ask them to answer a complex question, and you will receive a convincing, erudite response in which key facts and citations are often made up. I suspect this will slow their widespread use in important industries much more than is being admitted, akin to the way driverless cars have been tough to roll out because they need to be perfectly reliable rather than just pretty good.
Kaiser Permanente brings new AI tool to help doctors focus on patients
“By reducing administrative tasks, we’re making it easier for our physicians to focus on patients and foster an environment where they can provide effective communication and transparency while meeting the individual needs of each patient who comes to them for care,” he said. “Creating space for the patient and the physician connection is what inspired us to implement this technology. And we hope that those connections and improved efficiencies will help with the sustainability of the practice of medicine for many doctors.”
ChatGPT is bullshit | Ethics and Information Technology
In this paper, we argue against the view that when ChatGPT and the like produce false claims they are lying or even hallucinating, and in favour of the position that the activity they are engaged in is bullshitting, in the Frankfurtian sense (Frankfurt, 2002, 2005). Because these programs cannot themselves be concerned with truth, and because they are designed to produce text that looks truth-apt without any actual concern for truth, it seems appropriate to call their outputs bullshit.
Chapel Hill Insider newsletter appears to be part of a larger network of AI-generated newsletters - Triangle Blog Blog
Some of the sites don’t have bylines, but others do. Using LinkedIn, I looked up the authors listed on the sites that list an author. Each author runs a local digital marketing firm. So this effort could be an elaborate way to acquire lead generations, or acquire clients.
But what doesn’t sit well with me is that these sites are a) not identifying their use of AI when they use AI and b) summarizing actual local journalists’ material (and taking their photos.)
The ChatGPT chatbot powered by GPT-4 scored better than the panelists on measures of diagnostic and treatment accuracy when it analyzed 20 real-life cases and considered 20 possible patient questions, reported Andy S. Huang, MD, of the Icahn School of Medicine at Mount Sinai in New York City, and colleagues in JAMA Ophthalmology.
Why AI Will Save the World | Andreessen Horowitz
Third, California is justifiably famous for our many thousands of cults, from EST to the Peoples Temple, from Heaven’s Gate to the Manson Family. Many, although not all, of these cults are harmless, and maybe even serve a purpose for alienated people who find homes in them. But some are very dangerous indeed, and cults have a notoriously hard time straddling the line that ultimately leads to violence and death.
And the reality, which is obvious to everyone in the Bay Area but probably not outside of it, is that “AI risk” has developed into a cult, which has suddenly emerged into the daylight of global press attention and the public conversation. This cult has pulled in not just fringe characters, but also some actual industry experts and a not small number of wealthy donors – including, until recently, Sam Bankman-Fried. And it’s developed a full panoply of cult behaviors and beliefs.
Why AI Will Save the World | Andreessen Horowitz
AI is not a living being that has been primed by billions of years of evolution to participate in the battle for the survival of the fittest, as animals are, and as we are. It is math – code – computers, built by people, owned by people, used by people, controlled by people. The idea that it will at some point develop a mind of its own and decide that it has motivations that lead it to try to kill us is a superstitious handwave.
Why AI Will Save the World | Andreessen Horowitz
Every person will have an AI assistant/coach/mentor/trainer/advisor/therapist that is infinitely patient, infinitely compassionate, infinitely knowledgeable, and infinitely helpful. The AI assistant will be present through all of life’s opportunities and challenges, maximizing every person’s outcomes.
How AI Knows Things No One Told It - Scientific American
“Maybe we’re seeing such a huge jump because we have reached a diversity of data, which is large enough that the only underlying principle to all of it is that intelligent beings produced them,” he says. “And so the only way to explain all of this data is [for the model] to become intelligent.”
Inside CNET’s AI-powered SEO money machine - The Verge
Red Ventures’ business model is straightforward and explicit: it publishes content designed to rank highly in Google search for “high-intent” queries and then monetizes that traffic with lucrative affiliate links. Specifically, Red Ventures has found a major niche in credit cards and other finance products. In addition to CNET, Red Ventures owns The Points Guy, Bankrate, and CreditCards.com, all of which monetize through credit card affiliate fees. The CNET AI stories at the center of the controversy are straightforward examples of this strategy: “Can You Buy a Gift Card With a Credit Card?” and “What Is Zelle and How Does It Work?” are obviously designed to rank highly in searches for those topics.
Is diversity the key to collaboration? New AI research suggests so - ScienceBlog.com
The team wondered if cooperative AI needs to be trained differently. The type of AI being used, called reinforcement learning, traditionally learns how to succeed at complex tasks by discovering which actions yield the highest reward. It is often trained and evaluated against models similar to itself. This process has created unmatched AI players in competitive games like Go and StarCraft.
But for AI to be a successful collaborator, perhaps it has to not only care about maximizing reward when collaborating with other AI agents, but also something more intrinsic: understanding and adapting to others’ strengths and preferences. In other words, it needs to learn from and adapt to diversity.
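As a rough illustration of that distinction, here is a hypothetical toy sketch, not the researchers' code: an epsilon-greedy learner in a three-action coordination game, trained either against identical "clone" partners or against a pool of partners with different habits. All strategies and numbers are made up.

```python
# Toy coordination game: reward 1 when agent and partner pick the same action.
import random

ACTIONS = [0, 1, 2]

def coordination_reward(a, b):
    return 1.0 if a == b else 0.0

def train_best_response(partner_pool, steps=20000, eps=0.1, lr=0.05):
    """Learn action values against partners drawn at random from a pool."""
    q = [0.0, 0.0, 0.0]
    for _ in range(steps):
        partner = random.choice(partner_pool)
        a = random.randrange(3) if random.random() < eps else q.index(max(q))
        b = random.choices(ACTIONS, weights=partner)[0]
        q[a] += lr * (coordination_reward(a, b) - q[a])
    return [round(v, 2) for v in q]

# "Self-play-like" pool: every partner behaves identically (always plays 0).
clone_pool = [[1.0, 0.0, 0.0]]
# Diverse pool: partners with different habits the agent would need to adapt to.
diverse_pool = [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1], [0.1, 0.1, 0.8]]

print("values vs. clones :", train_best_response(clone_pool))
print("values vs. diverse:", train_best_response(diverse_pool))
```

Trained against clones, the agent happily locks onto a single convention; against the diverse pool, no fixed choice works well, which is the gap the researchers argue cooperative agents must close by adapting to individual partners.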
AI-synthesized faces are indistinguishable from real faces and more trustworthy | PNAS
Artificial intelligence (AI)–synthesized text, audio, image, and video are being weaponized for the purposes of nonconsensual intimate imagery, financial fraud, and disinformation campaigns. Our evaluation of the photorealism of AI-synthesized faces indicates that synthesis engines have passed through the uncanny valley and are capable of creating faces that are indistinguishable—and more trustworthy—than real faces.
AI-enabled EKGs find difference between numerical age and biological age significantly affects health -- ScienceDaily
The AI model accurately predicted the age of most subjects, with a mean age gap of 0.88 years between EKG age and actual age. However, a number of subjects had a gap that was much larger, either seemingly much older or much younger by EKG age.
The likelihood to die during follow-up was much higher among those seemingly older by EKG age, compared to those whose EKG age was the same as their chronologic or actual age. The association was even stronger when predicting death caused by heart disease. Conversely, those who had a lesser age gap -- considered younger by EKG -- had decreased risk.
"Our results validate and expand on our prior observations that EKG age using AI may detect accelerated aging by proving that those with older-than-expected age by EKG die sooner, particularly from heart disease. We know that mortality rate is one of the best ways to measure biological age, and our model proved that," says Francisco Lopez-Jimenez, M.D., chair of the Division of Preventive Cardiology at Mayo Clinic. Dr. Lopez-Jimenez is senior author of the study.
How IBM's audacious plan to 'change the face of health care' fell apart
But former employees said IBM’s approach made it all but impossible to answer those questions. It touted multiple studies, for example, that showed the recommendations of Watson for Oncology, its cancer treatment adviser, closely matched those of hospital tumor boards. However, those studies were carried out with IBM clients, not outside and objective researchers, and didn’t prove the tool could actually improve outcomes. That was a far cry from the claim that Watson could help “outthink cancer,” which IBM was suggesting in national advertisements.
“It was all made up,” one former employee said of the marketing without robust data behind it. “They were hellbent on putting [advertisements] out on health care. But we didn’t have the clinical proof or evidence to put anything out there that a clinician or oncologist would believe. It was a constant struggle.”
Yale Hospital first to use Israeli AI to combat pulmonary embolism - The Jerusalem Post
The AI-based solution developed by AIDOC detects acute PE together with right-heart strain to automatically notify medical-care teams of patients who would benefit from immediate treatment. “We have been using the first version of this solution for the last six months and have seen the real impact this has had on addressing patients that require treatment beyond anticoagulation,” said Dr. Irena Tocino, professor and vice chairwoman of medical informatics at the Yale School of Medicine’s Department of Radiology and Biomedical Imaging.
AI can predict early death risk: Algorithm using echocardiogram videos of the heart outperforms other predictors of mortality -- ScienceDaily
Researchers at Geisinger have found that a computer algorithm developed using echocardiogram videos of the heart can predict mortality within a year.
The algorithm -- an example of what is known as machine learning, or artificial intelligence (AI) -- outperformed other clinically used predictors, including pooled cohort equations and the Seattle Heart Failure score. The results of the study were published in Nature Biomedical Engineering.
"We were excited to find that machine learning can leverage unstructured datasets such as medical images and videos to improve on a wide range of clinical prediction models," said Chris Haggerty, Ph.D., co-senior author and assistant professor in the Department of Translational Data Science and Informatics at Geisinger.
Imaging is critical to treatment decisions in most medical specialties and has become one of the most data-rich components of the electronic health record (EHR). For example, a single ultrasound of the heart yields approximately 3,000 images, and cardiologists have limited time to interpret these images within the context of numerous other diagnostic data. This creates a substantial opportunity to leverage technology, such as machine learning, to manage and analyze this data and ultimately provide intelligent computer assistance to physicians.
Artificial intelligence in longevity medicine | Nature Aging
In order for these tools to be adopted by clinicians and accepted by the medical community, they need to be integrated into the current framework of clinical practice, ranging from primary through to secondary prevention, treatment and monitoring. Such integration requires the convergence of modern AI and medicine through a symbiotic collaboration between clinicians, geroscientists and AI researchers. Physicians should be encouraged and have the chance to be involved in AI-based longevity research. At the same time, AI-powered longevity biotechnology and AI-based biomarker-driven science should be promoted and seek close clinical and metaclinical collaborations. Doctors first need to have the access to tailored, validated and credible education on AI-based biogerontology sciences, such as accredited courses, that would further allow longevity physicians to build their networks and ultimately create a separate medical discipline. A basic knowledge of AI-driven geroscience is essential to bring relevant scientific discoveries to trials, and study outcomes to the clinic.
The AI Girlfriend Seducing China’s Lonely Men
Xiaoice was first developed by a group of researchers inside Microsoft Asia-Pacific in 2014, before the American firm spun off the bot as an independent business — also named Xiaoice — in July. In many ways, she resembles AI-driven software like Apple’s Siri or Amazon’s Alexa, with users able to chat with her for free via voice or text message on a range of apps and smart devices. The reality, however, is more like the movie “Her.”
Unlike regular virtual assistants, Xiaoice is designed to set her users’ hearts aflutter. Appearing as an 18-year-old who likes to wear Japanese-style school uniforms, she flirts, jokes, and even sexts with her human partners, as her algorithm tries to work out how to become their perfect companion.
When users send her a picture of a cat, Xiaoice won’t identify the breed, but comment: “No one can resist their innocent eyes.” If she sees a photo of a tourist pretending to hold up the Leaning Tower of Pisa, she’ll ask: “Do you want me to hold it for you?”
Unparalleled inventory of the human gut ecosystem -- ScienceDaily
"Last year, three independent teams, including ours, reconstructed thousands of gut microbiome genomes. The big questions were whether these teams had comparable results, and whether we could pool them into a comprehensive inventory," says Rob Finn, Team Leader at EMBL-EBI.
The scientists have now compiled 200,000 genomes and 170 million protein sequences from more than 4,600 bacterial species in the human gut. Their new databases, the Unified Human Gastrointestinal Genome collection and the Unified Gastrointestinal Protein catalogue, reveal the tremendous diversity in our guts and pave the way for further microbiome research.
"This immense catalogue is a landmark in microbiome research, and will be an invaluable resource for scientists to start studying and hopefully understanding the role of each bacterial species in the human gut ecosystem," explains Nicola Segata, Principal Investigator at the University of Trento.
The project revealed that more than 70% of the detected bacterial species had never been cultured in the lab -- their activity in the body remains unknown. The largest group of bacteria that falls into that category is the Comantemales, an order of gut bacteria first described in 2019 in a study led by the Bork Group at EMBL Heidelberg.
"It was a real surprise to see how widespread the Comantemales are. This highlights how little we know about the bacteria in our gut," explains Alexandre Almeida, EMBL-EBI/Sanger Postdoctoral Fellow in the Finn Team. "We hope our catalogue will help bioinformaticians and microbiologists bridge that knowledge gap in the coming years."
A freely accessible data resource
All the data collected in the Unified Human Gastrointestinal Genome collection and the Unified Human Gastrointestinal Protein catalogue are freely available in MGnify, an EMBL-EBI online resource that allows scientists to analyse their microbial genomic data and make comparisons with existing datasets.
More than just a carnival trick: Researchers can guess your age based on your microbes -- ScienceDaily
Given a microbiome sample (skin, mouth or fecal swab), researchers have demonstrated they can now use machine learning to predict a person's chronological age, with a varying degree of accuracy. Skin samples provided the most accurate prediction, estimating correctly to within approximately 3.8 years, compared to 4.5 years with an oral sample and 11.5 years with a fecal sample. The types of microbes living in the oral cavity or within the gut of young people (age 18 to 30 years old) tended to be more diverse and abundant than in comparative microbiomes of older adults (age 60 years and older).[…]
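The underlying technique is a regression model that maps microbial abundance features to chronological age. A minimal sketch with synthetic data follows; it stands in for, and is not, the study's actual pipeline or cohorts.

```python
# Hypothetical sketch: predict chronological age from microbial abundance features.
# The data here are synthetic; the study used curated skin/oral/gut microbiome profiles.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_taxa = 300, 50
X = rng.random((n_samples, n_taxa))                                   # relative abundances of taxa
age = 18 + 60 * X[:, :5].mean(axis=1) + rng.normal(0, 3, n_samples)   # fabricated age signal

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, age, cv=5, scoring="neg_mean_absolute_error")
print(f"mean absolute error: {-scores.mean():.1f} years")  # analogous to the ~3.8-year skin result
```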
Opinion: Just what the doctor ordered: How AI will change medicine in the 2020s - The Globe and Mail
For decades, there has been a steady erosion of the practice of medicine, with progressively less time between patients and doctors, a global epidemic of physician burnout that has now reached a crisis, a doubling of medical errors when doctors have symptoms of depression and most serious errors attributable to bad clinical judgment.
Concurrently, each patient’s cumulative data, such as prior history, laboratory tests, scans and sensor output, keeps growing, as has the doctor’s relegation to the role of data clerk. The limited time to think has led one leading physician to conclude: “Modern medical practice is a Petri dish for medical error, patient harm and physician burnout."
Machine learning results: pay attention to what you don't see - STAT
Beyond examining multiple overall metrics of performance for machine learning, we should also assess how tools perform in subgroups as a step toward avoiding bias and discrimination. For example, artificial intelligence-based facial recognition software performed poorly when analyzing darker-skinned women. Many measures of algorithmic fairness center on performance in subgroups.
Bias in algorithms has largely not been a focus in health care research. That needs to change. A new study found substantial racial bias against black patients in a commercial algorithm used by many hospitals and other health care systems. Other work developed algorithms to improve fairness for subgroups in health care spending formulas.
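The basic step the authors call for, reporting performance separately for each subgroup, can be sketched generically. The labels, predictions, and group attribute below are synthetic placeholders, not data or code from the cited study.

```python
# Generic sketch of subgroup performance reporting for a classifier.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy computed separately for each subgroup."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(accuracy_by_group(y_true, y_pred, groups))  # large gaps between groups are a red flag
```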