Google AI researchers have open-sourced the Language Interpretability Tool (LIT), a platform that lets third-party developers visualize, understand, and audit natural language processing (NLP) models.
LIT is built to answer deep questions about model behavior, such as why a model makes a certain prediction and whether that prediction can be attributed to adversarial behavior or to undesirable priors in the training set.
LIT calculates and displays metrics for entire data sets to spotlight patterns in model performance.
“In LIT’s metrics table, we can slice a selection by pronoun type and by the true referent,” according to the team behind LIT.
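To make the slicing concrete: a slice-level metric is just an aggregate computed per subgroup. The short sketch below is illustrative only; the column names and values are hypothetical, and LIT computes this inside its UI rather than through pandas, but it shows the kind of per-slice accuracy the metrics table reports:

```python
# Illustration of "slicing metrics by pronoun type and true referent".
# The data and column names here are made up for demonstration.
import pandas as pd

df = pd.DataFrame({
    "pronoun_type":  ["he", "she", "he", "she"],
    "true_referent": ["occupation", "occupation", "participant", "participant"],
    "correct":       [True, False, True, True],
})

# Accuracy per (pronoun type, true referent) slice, analogous to one
# column of LIT's metrics table.
print(df.groupby(["pronoun_type", "true_referent"])["correct"].mean())
```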
The tool supports natural language processing tasks like classification, language modeling, and structured prediction.
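For a sense of how a task is exposed to the tool, here is a minimal sketch of a classification dataset written against the lit-nlp package's Dataset API as published at release; the class name, example sentences, and label vocabulary are made up:

```python
# A minimal sketch of exposing a tiny sentiment-classification dataset to LIT.
# Assumes the lit-nlp package's Dataset API at its open-source release.
from lit_nlp.api import dataset as lit_dataset
from lit_nlp.api import types as lit_types

LABELS = ["negative", "positive"]

class TinySentimentData(lit_dataset.Dataset):
    """Two hand-written examples, just to show the spec/examples contract."""

    def __init__(self):
        self._examples = [
            {"sentence": "A delightful, sharply written film.", "label": "positive"},
            {"sentence": "Tedious and badly paced.", "label": "negative"},
        ]

    def spec(self):
        # Declares what each example field is, so the UI can render it.
        return {
            "sentence": lit_types.TextSegment(),
            "label": lit_types.CategoryLabel(vocab=LABELS),
        }
```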
According to VentureBeat, the Google researchers say LIT “works with any model that can run from Python, including TensorFlow, PyTorch, and remote models on a server.”
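Building on the dataset sketch above, the following hedged example wraps an arbitrary Python callable as a LIT model and launches the demo server. Here `my_predict_fn` is a hypothetical stand-in for any model that returns class probabilities, and the method names follow the lit-nlp API at its 2020 release (they may differ in later versions):

```python
# A sketch of wrapping an arbitrary Python-callable classifier for LIT,
# per the claim that any model runnable from Python works.
# `my_predict_fn` is hypothetical; TinySentimentData is the dataset sketch above.
from lit_nlp import dev_server
from lit_nlp import server_flags
from lit_nlp.api import model as lit_model
from lit_nlp.api import types as lit_types

LABELS = ["negative", "positive"]  # same label vocab as the dataset sketch

class WrappedClassifier(lit_model.Model):
    def __init__(self, predict_fn):
        # Any Python callable mapping a list of strings to class probabilities.
        self._predict_fn = predict_fn

    def input_spec(self):
        return {"sentence": lit_types.TextSegment()}

    def output_spec(self):
        # `parent` links predictions to the label field for metrics.
        return {"probas": lit_types.MulticlassPreds(vocab=LABELS, parent="label")}

    def predict_minibatch(self, inputs):
        texts = [ex["sentence"] for ex in inputs]
        for probs in self._predict_fn(texts):
            yield {"probas": probs}

def main():
    models = {"my_model": WrappedClassifier(my_predict_fn)}
    datasets = {"tiny_sentiment": TinySentimentData()}
    # Starts the LIT web UI (by default at http://localhost:5432).
    dev_server.Server(models, datasets, **server_flags.get_flags()).serve()

if __name__ == "__main__":
    main()
```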
Natural language processing is a subfield of linguistics, computer science, information engineering, and artificial intelligence concerned with the interactions between computers and human language, and in particular with programming computers to process and analyze large amounts of natural language data.
The Google LIT team said that in the near future, the toolset will gain features like counterfactual generation plug-ins, additional metrics and visualizations for sequence and structured output types, and a greater ability to customize the user interface (UI) for different applications.