Expert reaction to study comparing physician and AI chatbot responses to patient questions

A study published in JAMA Internal Medicine compares physician and artificial intelligence chatbot responses to patient questions.

Prof Martyn Thomas, Professor of IT, Gresham College, London, and Director and Principal Consultant, Martyn Thomas Associates Limited, said:

“As the authors explicitly recognise, they looked at a very small sample of medical questions submitted to a public online forum and compared replies from doctors with what ChatGPT responded. Neither the doctor nor GPT had access to the patient’s medical history or further context. This was not a randomised controlled trial. Their results should not be assumed to apply to other questions, asked differently or evaluated differently.

“From the examples of answers shown in the paper, the doctors gave succinct advice, whereas ChatGPT’s answers were similar to what you would find from a search engine selection of websites, but without the quality control that you would get by selecting (say) an NHS website.

“ChatGPT has no medical quality control or accountability, and LLMs are known to invent convincing answers that are untrue. Doctors are trained to spot rare conditions that might need urgent medical attention. Whilst most medical conditions get better without medical intervention, it would be foolish for a patient to prefer ChatGPT’s advice rather than seeking something authoritative.”