2084: KAT Knowledge
Knowledge Augmented Transformers that use both "common sense" and given knowledge to answer
ChatGPT is cool, but it merely reacts to what we give it, and the two different types of prompt, namely “querying” and “giving information”, are tied together. Introducing KAT, a Knowledge Augmented Transformer, which can answer a given question using both explicitly given knowledge and the implicit “commonsense” knowledge that many questions require.
KAT uses a frozen GPT-3 model to provide “commonsense” knowledge, conditioned on the question and the image, along with a separate set of encoder layers to encode the explicitly given knowledge. Details are in the paper. It’s pretty cool though, and having a more explicit split between question and knowledge might be helpful in the future, providing a cleaner way to interface with various powerful models.
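To make the split between the two knowledge streams concrete, here is a minimal, illustrative-only Python sketch of the pipeline shape described above: implicit knowledge queried from a (stubbed) frozen language model, explicit knowledge retrieved from a (toy) knowledge base, and both encoded as separate question-paired sequences. All function names, the keyword retrieval, and the input format are hypothetical stand-ins, not KAT's actual implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class KnowledgeInputs:
    implicit: List[str]   # "commonsense" candidates from a frozen LM (stubbed here)
    explicit: List[str]   # entries retrieved from an external knowledge base

def query_implicit_knowledge(question: str, image_caption: str) -> List[str]:
    # Stand-in for prompting a frozen GPT-3 with the question plus an image
    # description; here we just return a canned candidate.
    return [f"commonsense: {question} relates to {image_caption}"]

def retrieve_explicit_knowledge(question: str, kb: dict) -> List[str]:
    # Naive keyword retrieval over a toy knowledge base (hypothetical;
    # the real system uses a learned retriever).
    return [fact for key, fact in kb.items() if key in question.lower()]

def build_encoder_inputs(question: str, knowledge: KnowledgeInputs) -> List[str]:
    # The two knowledge streams stay separate: each piece of knowledge is
    # paired with the question and encoded on its own, to be fused later.
    return ([f"question: {question} knowledge: {k}" for k in knowledge.implicit]
            + [f"question: {question} knowledge: {k}" for k in knowledge.explicit])

kb = {"umbrella": "An umbrella protects against rain."}
question = "Why is the person holding an umbrella?"
knowledge = KnowledgeInputs(
    implicit=query_implicit_knowledge(question, "a person in the rain"),
    explicit=retrieve_explicit_knowledge(question, kb),
)
inputs = build_encoder_inputs(question, knowledge)
```

Keeping the implicit and explicit sequences distinct is the point: the model (or any downstream interface) can weigh "what the LM believes" against "what the knowledge base says" instead of mixing them into one prompt.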