We consider the inference problem of reconstructing a visual stimulus from brain activity measurements (e.g. fMRI) that encode this stimulus. Recovering a complete image is complicated by the fact that neural representations are noisy, high-dimensional, and contain incomplete information about image details. Thus, reconstructions of complex images from brain activity require a strong prior. Here we propose to train generative adversarial networks (GANs) to learn a generative model of images that is conditioned on measurements of brain activity. We consider two challenges of this approach: First, given that GANs require far more data to train than is typically collected in an fMRI experiment, how do we obtain enough samples to train a GAN that is conditioned on brain activity? Second, how do we ensure that our generated samples are robust against noise present in fMRI data? Our strategy for surmounting both of these problems centers on the creation of surrogate brain activity samples that are generated by an encoding model. We find that the generative model thus trained generalizes to real fMRI data measured during perception of images and is able to reconstruct the basic outline of the stimuli.
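The core data-augmentation idea of the abstract — using an encoding model to turn images into surrogate brain activity so that enough (image, activity) pairs exist to train a conditional GAN — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the linear voxel encoding model, the dimensions, and the Gaussian noise level are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper).
n_features = 64   # image feature dimension
n_voxels = 100    # number of simulated fMRI voxels
n_samples = 1000  # surrogate training pairs to generate

# A hypothetical linear encoding model: each voxel responds as a
# weighted sum of image features. Real encoding models are fit to
# measured fMRI responses; here the weights are random for illustration.
W = rng.normal(scale=1.0 / np.sqrt(n_features), size=(n_voxels, n_features))

def encode(features: np.ndarray, noise_sd: float = 0.5) -> np.ndarray:
    """Map image features to surrogate voxel responses with additive
    Gaussian noise, mimicking measurement noise in fMRI data."""
    clean = features @ W.T
    return clean + rng.normal(scale=noise_sd, size=clean.shape)

# Image features (stand-ins for features of a large image dataset).
image_features = rng.normal(size=(n_samples, n_features))

# Surrogate brain activity: these (image, activity) pairs would then
# serve as the conditioning data for training the conditional GAN.
surrogate_activity = encode(image_features)

print(surrogate_activity.shape)  # one activity vector per image
```

Training the GAN on many noisy surrogate samples per image is what, per the abstract, makes the generator robust to the noise present in real fMRI measurements.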
Original language: English (US)
Title of host publication: Proceedings - 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Number of pages: 8
State: Published - Jan 16 2019
Event: 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018 - Miyazaki, Japan
Duration: Oct 7 2018 - Oct 10 2018
Publication series: Proceedings - 2018 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2018
Bibliographical note (Funding Information):
ACKNOWLEDGMENT: A preliminary survey of this work appeared in Ref. . The authors would like to thank Jesse Breedlove for helpful comments on the manuscript. This work was supported by NIH grant R01 EY023384 to TN.