Healthcare Chatbots Provoke Unease Among AI Governance Analysts
AI Failures May Hide in Ways that Safety Tests Don't Measure
When an AI chatbot tells people to add glue to pizza, the error is obvious. When it recommends eating more bananas - sound nutritional advice that could be dangerous for someone with kidney failure - the mistake hides in plain sight.