In January, a brilliant OpEd ran in the New York Times in response to an MIT Technology Review piece that warned about the lack of transparency in the decision-making of AI systems. The Technology Review piece likened AI systems to “black boxes” and culminated with the warning that “no one really knows how the most advanced algorithms do what they do”. Talk to any data scientist who works with NLP and AI systems, and they will fundamentally disagree with that stance (and likely side with our champion, the OpEd’s author at the New York Times).
Natural language processing (NLP) is an important branch of artificial intelligence that allows machines to process large amounts of data expressed in human (natural) language - for instance, open-text fields or speech-to-text output. With NLP, just as with human language, it is critical that the machine be able to distinguish three things: content, concept, and context.
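To make the content/concept/context distinction concrete, here is a minimal, purely illustrative Python sketch (a toy, not a production NLP pipeline; the negation cues and example sentences are invented for illustration). It shows how two clinical notes can share the same content words for a concept while context - here, a negation cue - flips the meaning entirely:

```python
# Toy illustration: the same *content* (the words "chest pain") maps to
# opposite clinical meanings depending on *context* (negation).
# A plain bag-of-words view captures content but misses that distinction.

NEGATION_CUES = {"no", "denies", "without", "negative"}  # illustrative list

def bag_of_words(text):
    """Content only: which words appear; order and context are discarded."""
    return set(text.lower().replace(".", "").split())

def mentions_concept(text, concept_terms):
    """Concept: does the text refer to the clinical concept at all?"""
    return concept_terms <= bag_of_words(text)

def concept_is_negated(text, concept_terms):
    """Context: is a negation cue present alongside the concept?"""
    words = bag_of_words(text)
    return mentions_concept(text, concept_terms) and bool(words & NEGATION_CUES)

affirmed = "Patient reports chest pain on exertion."
negated = "Patient denies chest pain."
concept = {"chest", "pain"}

# Both notes contain the concept's content words...
assert mentions_concept(affirmed, concept)
assert mentions_concept(negated, concept)

# ...but context (the cue "denies") reverses the clinical meaning.
assert not concept_is_negated(affirmed, concept)
assert concept_is_negated(negated, concept)
```

Real systems use far richer machinery than a cue-word list, but the point stands: a model that only counts words sees these two notes as nearly identical, while one that models context does not.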
The delivery of high-performance clinical care – care that is reliable, efficient, and timely – is difficult. Even under the best marriage of clinical workflow and algorithmic modeling, unpredictable adverse outcomes can occur in modern clinical practice. Many analytics vendors address this risk through human oversight of their artificial intelligence (AI) algorithms.