Researchers at Klick Labs have unveiled a non-invasive technique that can predict chronic high blood pressure (hypertension) with a high degree of accuracy using just a person's voice. Published in the peer-reviewed journal IEEE Access, the findings hold significant potential for advancing early detection of hypertension and showcase another novel way to harness vocal biomarkers for better health outcomes.

The chatbot ChatGPT performed better than trainee doctors in assessing complex cases of respiratory disease in areas such as cystic fibrosis, asthma and chest infections in a study presented at the European Respiratory Society (ERS) Congress in Vienna, Austria.

The study also showed that Google’s chatbot Bard performed better than trainees in some aspects and Microsoft’s Bing chatbot performed as well as trainees.

Researchers at the Icahn School of Medicine at Mount Sinai have developed a noninvasive technique that could dramatically improve the way doctors monitor intracranial hypertension, a condition where increased pressure in the brain can lead to severe outcomes like strokes and hemorrhages.

The new approach, driven by artificial intelligence (AI), offers a safer and faster alternative to the current gold standard of drilling into the skull.

Scientists at Harvard Medical School have designed a versatile, ChatGPT-like AI model capable of performing an array of diagnostic tasks across multiple forms of cancers.

The new AI system, described Sept. 4 in Nature, goes a step beyond many current AI approaches to cancer diagnosis, the researchers said.

Current AI systems are typically trained to perform specific tasks, such as detecting the presence of cancer or predicting a tumor's genetic profile, and they tend to work only in a handful of cancer types.

Researchers evaluating the performance of ChatGPT-4 Vision found that the model performed well on text-based radiology exam questions but struggled to answer image-related questions accurately. The study's results were published today in Radiology, a journal of the Radiological Society of North America (RSNA).

ChatGPT-4 Vision is the first version of the large language model that can interpret both text and images.

A machine-learning tool created by Weill Cornell Medicine and Hospital for Special Surgery (HSS) investigators can help distinguish subtypes of rheumatoid arthritis (RA), which may help scientists find ways to improve care for the complex condition.

The study published Aug. 29 in Nature Communications shows that artificial intelligence and machine learning technologies can effectively and efficiently subtype pathology samples from patients with RA.

As part of its ongoing exploration of vocal biomarkers and the role they can play in enhancing health outcomes, Klick Labs published a new study in Scientific Reports, confirming the link between blood glucose levels and voice pitch and opening the door to future advancements in non-invasive glucose monitoring for people living with Type 2 diabetes.
