Advances in explainable neural natural language processing
Robert Schwarzenberg (German Research Center for Artificial Intelligence (DFKI), Berlin)
In recent years, we have witnessed ever more complex neural models breaking records in various domains, among them natural language processing (NLP). These state-of-the-art (SOTA) models, however, have become less explainable: it is often nearly impossible to reason accurately about their internal information flow, intermediate representations, and acquired knowledge. Interestingly, the focus has recently shifted, and we are now arguably witnessing the advent of neural explainability. In NLP, for instance, rather than merely pushing benchmark scores, the field’s most prominent conference (ACL) now places an emphasis on “examining, analyzing, and interpreting SOTA models” to “assess how much we have accomplished in developing a machine’s ability in understanding and generating human language.” In this talk, we will explore these exciting developments in the field of explainable neural NLP.