OpenAI, which develops publicly available projects in the field of artificial intelligence, has released Transformer Debugger, a debugger designed to analyze the activations of structures inside machine-learning language models as they process specific data. As in traditional debuggers, Transformer Debugger supports step-by-step navigation through a model's output, tracing, and interception of specific activations. In general terms, Transformer Debugger makes it possible to figure out why a language model outputs one token instead of another in response to a given prompt, or why the model pays more attention to certain tokens in the prompt. The code is written in Python and is distributed under the MIT license.
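Transformer Debugger itself is an interactive tool, but the kind of question it answers can be illustrated with a short standalone sketch. The example below is a hypothetical illustration, not Transformer Debugger's own API: it uses the Hugging Face transformers package to compare the next-token logits a GPT-2 model assigns to two candidate tokens for a given prompt.

```python
# Hypothetical illustration (not Transformer Debugger's API): compare the
# logits GPT-2 assigns to two candidate next tokens for a given prompt.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

inputs = tokenizer("The capital of France is", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

next_token_logits = logits[0, -1]  # scores for the token after the prompt

for candidate in [" Paris", " London"]:
    token_id = tokenizer.encode(candidate)[0]
    print(f"{candidate!r}: logit {next_token_logits[token_id].item():.2f}")
```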
The release includes the following components:
- A neuron viewer providing pages with information about individual model components: MLP neurons, attention heads, and autoencoder latents.
- Models: a library for GPT-2 language models and the autoencoders used with them, providing hooks (substitutable handlers) for intercepting activations; a generic sketch of this technique follows the list.
- Example activation datasets for MLP neurons, attention heads, and autoencoder latents.
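The hook mechanism described for the Models component can be approximated with standard PyTorch forward hooks. The following is a minimal sketch under that assumption; it uses Hugging Face's GPT-2 implementation rather than the project's own library, so the module paths and hook wiring are illustrative.

```python
# Hypothetical sketch of activation interception using PyTorch forward hooks;
# the project's Models library exposes its own hook mechanism, which may differ.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

captured = {}

def save_mlp_output(module, inputs, output):
    # Called after the block's MLP runs; stash its activations for inspection.
    captured["mlp_0"] = output.detach()

# Attach the hook to the MLP of the first transformer block.
handle = model.transformer.h[0].mlp.register_forward_hook(save_mlp_output)

tokens = tokenizer("Hello world", return_tensors="pt")
with torch.no_grad():
    model(**tokens)

handle.remove()
print(captured["mlp_0"].shape)  # (1, num_tokens, hidden_size)
```

Removing the hook with handle.remove() after the forward pass restores the model's unmodified behavior, so captured activations can be inspected without permanently altering inference.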