https://colab.research.google.com/drive/1mhcgx2SU3GrDq3pMZp_-JPtE_fO-7kGg#scrollTo=7GpmMGjpioid

simple_analogies_circuits.ipynb:

GPT-2-small and GPT-2-large: number sequences and simple analogies using in-context learning. Uses different ways to corrupt and format the inputs. Tests via logit difference, attention patterns, and activation patching (see the sketch below). Based on the exploratory analysis demo.
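A minimal sketch of the logit-difference metric and a single activation patch, assuming TransformerLens (the library behind the exploratory analysis demo). The clean/corrupted prompt pair, answer tokens, and the patched layer/position are illustrative placeholders, not the notebook's actual inputs:

```python
import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")  # GPT-2-small

# Hypothetical clean/corrupted pair for a number-sequence task; the real
# notebook's prompts and corruption schemes are not shown here.
clean_prompt = "1 2 3 4 5"
corrupt_prompt = "1 2 3 4 8"
correct_tok = model.to_single_token(" 6")  # answer after the clean sequence
wrong_tok = model.to_single_token(" 9")    # answer the corruption pushes toward

def logit_diff(logits: torch.Tensor) -> torch.Tensor:
    # Correct-minus-wrong logit at the final position (the "logit diff" metric).
    return logits[0, -1, correct_tok] - logits[0, -1, wrong_tok]

clean_logits, clean_cache = model.run_with_cache(clean_prompt)
corrupt_logits = model(corrupt_prompt)
print("clean:", logit_diff(clean_logits).item())
print("corrupt:", logit_diff(corrupt_logits).item())

# Activation patching: rerun the corrupted prompt, but overwrite one layer's
# residual stream at one position with the cached clean activation.
layer, pos = 5, -1  # illustrative layer/position; a real run sweeps these

def patch_resid(resid, hook):
    resid[:, pos, :] = clean_cache[hook.name][:, pos, :]
    return resid

patched_logits = model.run_with_hooks(
    corrupt_prompt,
    fwd_hooks=[(utils.get_act_name("resid_pre", layer), patch_resid)],
)
print("patched:", logit_diff(patched_logits).item())
```

If patching a given layer/position restores most of the clean logit diff, that activation likely carries the task-relevant information; sweeping over all layers and positions gives the usual patching heatmap.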

GPT-2-small prompts:

GPT-2-large prompts:


https://colab.research.google.com/drive/1aOEeY4roW8oWqkZ0MuuZRJXmJGDRNcbr

simple_analogies_circuits, pt2.ipynb:

Checks whether GPT-2-large, GPT-2-XL, and GPT-Neo-2.7B can correctly complete analogies. Does preliminary logit lens, activation patching, and attention head analysis on a few input formats (a minimal logit-lens sketch is below). Also checks why these billion-parameter models have trouble finding who "is" or who "has" something, given a previous statement from in-context learning. Prompts include:
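A minimal logit-lens sketch for the larger models, again assuming TransformerLens. The model choice and the analogy prompt here are illustrative placeholders, not the notebook's actual prompts:

```python
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2-large")

prompt = "big is to small as tall is to"  # placeholder analogy prompt
answer = model.to_single_token(" short")

logits, cache = model.run_with_cache(prompt)

# Logit lens: project the accumulated residual stream at each layer through
# the unembedding, and watch where the answer's logit emerges with depth.
accum_resid, labels = cache.accumulated_resid(apply_ln=True, return_labels=True)
per_layer_logits = accum_resid[:, 0, -1, :] @ model.W_U  # [n_layers+1, d_vocab]
for label, row in zip(labels, per_layer_logits):
    print(f"{label}: answer logit = {row[answer].item():.2f}")
```

A late, sudden jump in the answer's logit suggests the completion is computed in the final layers; a gradual rise suggests it is built up across depth.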

Try: