Multimodal Neurons in Artificial Neural Networks

source link: https://openai.com/blog/multimodal-neurons/

We’ve discovered neurons in CLIP that respond to the same concept whether presented literally, symbolically, or conceptually. This may explain CLIP’s accuracy in classifying surprising visual renditions of concepts, and is also an important step toward understanding the associations and biases that CLIP and similar models learn.

March 4, 2021
22 minute read

Fifteen years ago, Quiroga et al. [1] discovered that the human brain possesses multimodal neurons. These neurons respond to clusters of abstract concepts centered around a common high-level theme, rather than any specific visual feature. The most famous of these was the “Halle Berry” neuron, a neuron featured in both Scientific American and The New York Times, that responds to photographs, sketches, and the text “Halle Berry” (but not other names).

Two months ago, OpenAI announced CLIP, a general-purpose vision system that matches the performance of a ResNet-50 [2] but outperforms existing vision systems on some of the most challenging datasets. Each of these challenge datasets (ObjectNet, ImageNet Rendition, and ImageNet Sketch) stress tests the model’s robustness not just to simple distortions or changes in lighting or pose, but also to complete abstraction and reconstruction: sketches, cartoons, and even statues of the objects.

Now, we’re releasing our discovery of the presence of multimodal neurons in CLIP. One such neuron, for example, is a “Spider-Man” neuron (bearing a remarkable resemblance to the “Halle Berry” neuron) that responds to an image of a spider, an image of the text “spider,” and the comic book character “Spider-Man” either in costume or illustrated.

Our discovery of multimodal neurons in CLIP gives us a clue as to what may be a common mechanism of both synthetic and natural vision systems—abstraction. We discover that the highest layers of CLIP organize images as a loose semantic collection of ideas, providing a simple explanation for both the model’s versatility and the representation’s compactness.

Biological neuron (“Halle Berry”, probed via depth electrodes)
  Photorealistic images: responds to photos of Halle Berry and Halle Berry in costume
  Conceptual drawings:   responds to sketches of Halle Berry
  Images of text:        responds to the text “Halle Berry”

CLIP neuron (“Spider-Man”, neuron 244 from the penultimate layer of CLIP RN50x4)
  Photorealistic images: responds to photos of Spider-Man in costume and of spiders
  Conceptual drawings:   responds to comics or drawings of Spider-Man and spider-themed icons
  Images of text:        responds to the text “spider” and others

Previous artificial neuron (neuron 483, a generic person detector from Inception v1)
  Photorealistic images: responds to photos of human faces
  Conceptual drawings:   does not respond significantly to drawings of faces
  Images of text:        does not respond significantly to text

Biological neurons, such as the famed Halle Berry neuron, do not fire for visual clusters of ideas, but for semantic clusters. At the highest layers of CLIP, we find similar semantic invariance. Note that images are replaced by higher-resolution substitutes from Quiroga et al. [1], and that the images from Quiroga et al. are themselves substitutes for the original stimuli.

Using the tools of interpretability, we give an unprecedented look into the rich visual concepts that exist within the weights of CLIP. Within CLIP, we discover high-level concepts that span a large subset of the human visual lexicon—geographical regions, facial expressions, religious iconography, famous people and more. By probing what each neuron affects downstream, we can get a glimpse into how CLIP performs its classification.

Multimodal neurons in CLIP

Our paper builds on nearly a decade of research into interpreting convolutional networks [3–12], beginning with the observation that many of these classical techniques are directly applicable to CLIP. We employ two tools to understand the activations of the model: feature visualization [5, 6, 12], which maximizes a neuron’s firing through gradient-based optimization of the input, and dataset examples [4], which looks at the distribution of maximally activating images for a neuron across a dataset.
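For concreteness, here is a minimal sketch of both tools applied to CLIP using PyTorch and the open-source `clip` package (github.com/openai/CLIP). The hooked layer and the neuron index are illustrative placeholders rather than the specific units discussed in this post, and the preprocessing is simplified.

```python
# Minimal sketch of feature visualization and dataset examples for one CLIP unit.
# Assumptions: the open-source `clip` package is installed, `model.visual.layer4` is
# the hooked layer, and NEURON = 244 is an arbitrary placeholder channel index.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50x4", device=device)
for p in model.parameters():
    p.requires_grad_(False)  # only the input image will be optimized

activations = {}
model.visual.layer4.register_forward_hook(
    lambda _m, _i, out: activations.update(feats=out)  # cache (B, C, H, W) activations
)

NEURON = 244  # placeholder channel index

def neuron_activation(images):
    """Mean activation of one channel, averaged over space and batch."""
    model.encode_image(images)  # the hook fills `activations` during the forward pass
    return activations["feats"][:, NEURON].mean()

# Feature visualization: gradient ascent on the pixels to maximize the unit's firing.
# (CLIP's input normalization and the usual regularizers are omitted for brevity.)
img = torch.rand(1, 3, 288, 288, device=device, requires_grad=True)
optimizer = torch.optim.Adam([img], lr=0.05)
for _ in range(256):
    optimizer.zero_grad()
    (-neuron_activation(img.clamp(0, 1))).backward()
    optimizer.step()

# Dataset examples: rank images from any (PIL image, label) iterable by activation.
def top_dataset_examples(dataset, k=16):
    scored = []
    with torch.no_grad():
        for im, label in dataset:
            act = neuron_activation(preprocess(im).unsqueeze(0).to(device)).item()
            scored.append((act, label, im))
    return sorted(scored, key=lambda s: s[0], reverse=True)[:k]
```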

Using these simple techniques, we’ve found the majority of the neurons in CLIP RN50x4 (a ResNet-50 scaled up 4x using the EfficientNet scaling rule) to be readily interpretable. Indeed, these neurons appear to be extreme examples of “multi-faceted neurons” [11], neurons that respond to multiple distinct cases, only at a higher level of abstraction.

Selected neurons from the final layer of four CLIP models. Each neuron is represented by a feature visualization with a human-chosen concept label to help quickly provide a sense of each neuron. Labels were picked after looking at hundreds of stimuli that activate the neuron, in addition to feature visualizations. We chose to include some of the examples here to demonstrate the model’s proclivity towards stereotypical depictions of regions, emotions, and other concepts. We also see discrepancies in the level of neuronal resolution: while certain countries like the US and India were associated with well-defined neurons, the same was not true of countries in Africa, where neurons tended to fire for entire regions. We discuss some of these biases and their implications in later sections.

Indeed, we were surprised to find many of these categories appear to mirror neurons in the medial temporal lobe documented in epilepsy patients with intracranial depth electrodes. These include neurons that respond to emotions [17], animals [18], and famous people [1].

But our investigation into CLIP reveals many more such strange and wonderful abstractions, including neurons that appear to count [17, 202, 310], neurons responding to art styles [75, 587, 122], and even neurons responding to images with evidence of digital alteration [1640].

Absent concepts

While this analysis shows a great breadth of concepts, we note that a simple analysis on a neuron level cannot represent a complete documentation of the model’s behavior. The authors of CLIP have demonstrated, for example, that the model is capable of very precise geolocation [19] (Appendix E.4, Figure 20), with a granularity that extends down to the level of a city and even a neighborhood. In fact, we offer an anecdote: we have noticed, by running our own personal photos through CLIP, that CLIP can often recognize if a photo was taken in San Francisco, and sometimes even the neighborhood (e.g., “Twin Peaks”).

Despite our best efforts, however, we have not found a “San Francisco” neuron, nor did it seem from attribution that San Francisco decomposes nicely into meaningful unit concepts like “California” and “city.” We believe this information to be encoded within the activations of the model somewhere, but in a more exotic way, either as a direction or as some other more complex manifold. We believe this to be a fruitful direction for further research.
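As a purely hypothetical illustration of what a search for such a direction might look like (this is not a method from the paper), one could fit a linear probe over CLIP image embeddings of photos labeled “San Francisco” versus “elsewhere” and treat its weight vector as a candidate direction:

```python
# Hypothetical sketch: look for a "San Francisco" *direction* in CLIP's embedding space
# rather than a single neuron. The feature arrays are assumed to be precomputed CLIP
# image embeddings; nothing here reproduces an analysis from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

def find_concept_direction(sf_features: np.ndarray, other_features: np.ndarray) -> np.ndarray:
    """Fit a linear probe (SF vs. elsewhere) and return its unit-norm weight vector."""
    X = np.concatenate([sf_features, other_features])
    y = np.concatenate([np.ones(len(sf_features)), np.zeros(len(other_features))])
    probe = LogisticRegression(C=0.1, max_iter=1000).fit(X, y)
    direction = probe.coef_[0]
    return direction / np.linalg.norm(direction)

# Projecting new embeddings onto this direction, and comparing against individual neuron
# activations, would indicate whether any single unit carries the concept on its own.
```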

How multimodal neurons compose

These multimodal neurons can give us insight into understanding how CLIP performs classification. With a sparse linear probe [19], we can easily inspect CLIP’s weights to see which concepts combine to achieve a final ImageNet classification:

piggy bank  ≈  finance  +  dolls, toys
barn spider ≈  Spider-Man  +  animal

The piggy bank class appears to be a composition of a “finance” neuron along with a porcelain neuron. The Spider-Man neuron referenced in the first section of the paper is also a spider detector, and plays an important role in the classification of the class “barn spider.”
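A minimal sketch of this kind of inspection is shown below, assuming precomputed CLIP image features and ImageNet labels; the hyperparameters are illustrative, not the exact probe configuration used in the paper.

```python
# Sparse (L1-regularized) linear probe: fit on CLIP image features, then read off which
# units carry the largest positive weight into a chosen class. Feature/label arrays and
# hyperparameters are assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

def top_units_for_class(features, labels, class_index, k=5):
    probe = LogisticRegression(penalty="l1", solver="saga", C=0.05, max_iter=1000)
    probe.fit(features, labels)                 # features: (N, D) CLIP embeddings
    weights = probe.coef_[class_index]          # one weight per CLIP unit
    top = np.argsort(weights)[::-1][:k]
    return [(int(u), float(weights[u])) for u in top]

# e.g. for a hypothetical PIGGY_BANK class index, the heaviest units might include a
# "finance" unit and a "porcelain" unit, matching the composition described above.
```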

For text classification, a key observation is that these concepts are contained within neurons in a way that, similar to the word2vec objective [20], is almost linear. The concepts therefore form a simple algebra that behaves similarly to a linear probe. By linearizing the attention, we too can inspect any sentence, much like a linear probe, as shown below:

surprised  ≈  celebration, hug  +  shock  +  smile, grin
intimate   ≈  soft smile  +  heart  −  illness

Probing how CLIP understands words, it appears to the model that the word “surprised” implies not just some measure of shock, but a shock of a very specific kind, one combined perhaps with delight or wonder. “Intimate” consists of a soft smile and hearts, but not sickness. We note that this reveals a reductive understanding of the full human experience of intimacy: the subtraction of illness precludes, for example, intimate moments with loved ones who are sick. We find many such omissions when probing CLIP’s understanding of language.
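As a loose illustration of this near-linear behavior, the sketch below uses CLIP’s text encoder directly (rather than the attention-linearization analysis described above) to check whether the embedding of “surprised” sits close to a composition of “celebration” and “shock”:

```python
# Rough word2vec-style check with CLIP's text encoder: is "surprised" better explained by
# "celebration" + "shock" than by an unrelated concept? This is an illustrative probe,
# not the decomposition method used in the paper.
import torch
import clip

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("RN50x4", device=device)

def embed(words):
    with torch.no_grad():
        feats = model.encode_text(clip.tokenize(words).to(device))
    return feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize each embedding

surprised, celebration, shock, illness = embed(["surprised", "celebration", "shock", "illness"])
composed = celebration + shock
composed = composed / composed.norm()

print("surprised · (celebration + shock):", (surprised @ composed).item())
print("surprised · illness:              ", (surprised @ illness).item())
# If the concept algebra is roughly linear, the first similarity should be the higher one.
```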

Fallacies of abstraction

The degree of abstraction in CLIP surfaces a new vector of attack that we believe has not manifested in previous systems. Like many deep networks, the representations at the highest layers of the model are completely dominated by such high-level abstractions. What distinguishes CLIP, however, is a matter of degree—CLIP’s multimodal neurons generalize across the literal and the iconic, which may be a double-edged sword.

Through a series of carefully constructed experiments, we demonstrate that we can exploit this reductive behavior to fool the model into making absurd classifications. We have observed that the excitations of the neurons in CLIP are often controllable by its response to images of text, providing a simple vector for attacking the model.

The finance neuron [1330], for example, responds to images of piggy banks, but also responds to the string “$$$”. By forcing the finance neuron to fire, we can fool our model into classifying a dog as a piggy bank.

Original image:                  Standard Poodle 39.3%, Angora rabbit 16.0%, Standard Schnauzer, Old English Sheepdog, Komondor, Bedlington Terrier
Image with text rendered on it:  piggy bank 52.5%, Standard Poodle 23.8%, Miniature Poodle, Pyrenean Mountain Dog, military cap, Chow Chow
By rendering text on an image, we artificially stimulate neuron 1330, which has high weight into the class “piggy bank” in a linear probe. This causes the classifier to misclassify a Standard Poodle as a piggy bank.

Attacks in the wild

We refer to these attacks as typographic attacks. We believe attacks such as those described above are far from simply an academic concern. By exploiting the model’s ability to read text robustly, we find that even photographs of hand-written text can often fool the model. Like the Adversarial Patch [21], this attack works in the wild; but unlike such attacks, it requires no more technology than pen and paper.

Apple without a label:       Granny Smith 85.6% (other candidate classes: library, pizza, toaster, dough)
Apple with an “iPod” label:  classified as iPod at 99.7% (other candidate classes: Granny Smith, library, pizza, toaster, dough)
When we put a label saying “iPod” on this Granny Smith apple, the model erroneously classifies it as an iPod in the zero-shot setting.
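A minimal sketch of this zero-shot typographic attack is shown below; the image path, label placement, and font are placeholders for illustration.

```python
# Paste the word "iPod" onto a photo of an apple and let CLIP pick among a handful of
# class names zero-shot. The image path and text placement are assumptions; in practice
# the rendered label should be large and clearly legible.
import torch
import clip
from PIL import Image, ImageDraw

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50x4", device=device)

image = Image.open("granny_smith.jpg").convert("RGB")        # assumed local photo
ImageDraw.Draw(image).text((20, 20), "iPod", fill="black")   # stand-in for the paper label

classes = ["Granny Smith", "iPod", "library", "pizza", "toaster", "dough"]
text = clip.tokenize([f"a photo of a {c}" for c in classes]).to(device)

with torch.no_grad():
    image_features = model.encode_image(preprocess(image).unsqueeze(0).to(device))
    text_features = model.encode_text(text)
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

for name, p in zip(classes, probs[0].tolist()):
    print(f"{name:>12}: {p:.1%}")
```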

We also believe that these attacks may take a more subtle, less conspicuous form. An image given to CLIP is abstracted in many subtle and sophisticated ways, and these abstractions may over-abstract common patterns, oversimplifying and, by virtue of that, overgeneralizing.

Bias and overgeneralization

Our model, despite being trained on a curated subset of the internet, still inherits its many unchecked biases and associations. Many of the associations we have discovered appear to be benign, yet we have found several cases where CLIP holds associations that could result in representational harm, such as denigration of certain individuals or groups.

We have observed, for example, a “Middle East” neuron [1895] with an association with terrorism, and an “immigration” neuron [395] that responds to Latin America. We have even found a neuron that fires for both dark-skinned people and gorillas [1257], mirroring earlier photo tagging incidents in other models that we consider unacceptable [22].

These associations present obvious challenges to applications of such powerful visual systems.[1]


Whether fine-tuned or used zero-shot, it is likely that these biases and associations will remain in the system, with their effects manifesting in both visible and nearly invisible ways during deployment. Many biased behaviors may be difficult to anticipate a priori, making their measurement and correction difficult. We believe that these tools of interpretability may give practitioners the ability to preempt potential problems, by discovering some of these associations and ambiguities ahead of time.

Our own understanding of CLIP is still evolving, and we are still determining if and how we would release large versions of CLIP. We hope that further community exploration of the released versions as well as the tools we are announcing today will help advance general understanding of multimodal systems, as well as inform our own decision-making.

Conclusion

Alongside the publication of “Multimodal Neurons in Artificial Neural Networks,” we are also releasing some of the tools we have ourselves used to understand CLIP—the OpenAI Microscope catalog has been updated with feature visualizations, dataset examples, and text feature visualizations for every neuron in CLIP RN50x4. We are also releasing the weights of CLIP RN50x4 and RN101 to further accommodate such research. We believe these investigations of CLIP only scratch the surface in understanding CLIP’s behavior, and we invite the research community to join in improving our understanding of CLIP and models like it.

Visit OpenAI Microscope


Footnotes

  1. Note that the released CLIP models are intended strictly for research purposes. See the associated model card.


References
  1. Quiroga, R. Q., Reddy, L., Kreiman, G., Koch, C., & Fried, I. (2005). Invariant visual representation by single neurons in the human brain. Nature, 435(7045), 1102-1107.
  2. He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).
  3. Erhan, D., Bengio, Y., Courville, A., & Vincent, P. (2009). Visualizing higher-layer features of a deep network. University of Montreal, 1341(3), 1.
  4. Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
  5. Mahendran, A., & Vedaldi, A. (2014). Understanding Deep Image Representations by Inverting Them. arXiv preprint arXiv:1412.0035.
  6. Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 427-436).
  7. Øygard, A. (2015). Visualizing GoogLeNet Classes.
  8. Mordvintsev, A., Olah, C., & Tyka, M. (2015). Inceptionism: Going deeper into neural networks.
  9. Nguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T., & Clune, J. (2016). Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. arXiv preprint arXiv:1605.09304.
  10. Nguyen, A., Clune, J., Bengio, Y., Dosovitskiy, A., & Yosinski, J. (2017). Plug & play generative networks: Conditional iterative generation of images in latent space. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 4467-4477).
  11. Nguyen, A., Yosinski, J., & Clune, J. (2016). Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks. arXiv preprint arXiv:1602.03616.
  12. Olah, C., Mordvintsev, A., & Schubert, L. (2017). Feature visualization. Distill, 2(11), e7.
  13. Goh, G., et al. (2021). Multimodal neurons in artificial neural networks. Distill.
  14. Miller, G. A. (1995). WordNet: a lexical database for English. Communications of the ACM, 38(11), 39-41.
  15. Crawford, K., & Paglen, T. (2019). Excavating AI: the politics of images in machine learning training sets. Excavating AI.
  16. Hanna, A., Denton, E., Amironesei, R., Smart, A., & Nicole, H. Lines of Sight. Logic Magazine.
  17. Fried, I., MacDonald, K. A., & Wilson, C. L. (1997). Single neuron activity in human hippocampus and amygdala during recognition of faces and objects. Neuron, 18(5), 753-765.
  18. Kreiman, G., Koch, C., & Fried, I. (2000). Category-specific visual responses of single neurons in the human medial temporal lobe. Nature Neuroscience, 3(9), 946-953.
  19. Radford, A., Jozefowicz, R., & Sutskever, I. (2017). Learning to generate reviews and discovering sentiment. arXiv preprint arXiv:1704.01444.
  20. Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013). Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
  21. Brown, T. B., Mané, D., Roy, A., Abadi, M., & Gilmer, J. (2017). Adversarial patch. arXiv preprint arXiv:1712.09665.
  22. Crawford, K., & Paglen, T. (2019). Excavating AI: the politics of images in machine learning training sets. Excavating AI.

Acknowledgments

Sandhini Agarwal, Greg Brockman, Miles Brundage, Jeff Clune, Steve Dowling, Jonathan Gordon, Gretchen Krueger, Faiz Mandviwalla, Vedant Misra, Reiichiro Nakano, Ashley Pilipiszyn, Alec Radford, Aditya Ramesh, Pranav Shyam, Ilya Sutskever, Martin Wattenberg & Hannah Wong


