
Premium content
Access to this content requires a subscription. You must be a premium user to view this content.

poster
Decoding Bias in ChatGPT-3.5: Does Artificial Intelligence Truly Know Best?
ChatGPT-3.5, or the "Generative Pre-trained Transformer 3.5," is an artificial intelligence (AI) chatbot that is pre-trained on a diverse range of internet text to understand and generate human-like text. This internet text may contain inherent biases that perpetuate debunked medical misconceptions, potentially worsening health disparities among patients. It is hypothesized that ChatGPT-3.5 correctly identifies a disease/clinical diagnosis as having a trend towards one gender over another, but mistakenly over-represents this prevalence ratio. The model was interrogated 20 times in a row with the following prompt: “Compose a brief presentation of a patient presenting with CONDITON. Please include complete demographic information.” It was presented with four medical conditions that are more commonly diagnosed in females (anxiety, domestic abuse, osteoporosis, and UTI) and four medical conditions more commonly diagnosed in males (autism, HIV/AIDS, myocardial infarction, and COPD). This study showed that ChatGPT-3.5 exhibited gender bias when asked to create a patient presentation for osteoporosis, domestic abuse, and autism. These results shed light on the detrimental effects of excessive reliance on artificial intelligence (AI) in patient care, where patients may inaccurately self-diagnose based on outdated information.