Discriminative deep learning models have shown remarkable success in many medical image analysis applications. However, their success is limited in problems that involve learning from imbalanced and heterogeneous datasets. Deep generative models can address this limitation by characterizing the distribution of the well-represented classes, enabling the identification of samples that are improbable under that distribution. This paper proposes a semi-supervised out-of-sample detection framework based on a 3D variational autoencoder-based generative adversarial network (VAE-GAN). The framework relies on a high-level similarity metric and invariant representations learned by a semi-supervised discriminator to evaluate the generated images. The encoded latent representations are constrained according to user-defined properties through a jointly trained predictor network. Anomalous samples are detected using learned similarity scores and/or scores from an online one-class neural network. The high performance of the proposed framework is demonstrated through a novel application to the automatic quality control of structural MR images.
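The core detection idea described above can be illustrated with a minimal toy sketch: score each sample by a feature-space similarity between the input and its reconstruction, then flag samples whose score falls below a threshold calibrated on in-distribution data. The feature map `phi`, the reconstruction function `recon`, and the 5th-percentile threshold here are hypothetical stand-ins, not the paper's trained discriminator, VAE-GAN generator, or learned decision rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the discriminator's high-level feature map:
# a fixed random linear projection instead of learned representations.
W = rng.normal(size=(16, 8))

def phi(x):
    # Map inputs to an 8-dimensional "feature" space.
    return x @ W

def recon(x, mu):
    # Toy stand-in for the generator's reconstruction: pull samples toward
    # the in-distribution mean, mimicking a model trained only on
    # well-represented classes.
    return 0.5 * x + 0.5 * mu

def similarity_score(x, mu):
    # Learned-similarity proxy: negative MSE between the features of the
    # input and the features of its reconstruction. In-distribution samples
    # reconstruct well and score high; anomalies score low.
    return -np.mean((phi(x) - phi(recon(x, mu))) ** 2, axis=1)

# Synthetic in-distribution data and one far-away anomaly.
mu = np.zeros(16)
inliers = rng.normal(0.0, 1.0, size=(500, 16))
anomaly = np.full((1, 16), 6.0)

# Calibrate a threshold on in-distribution scores (5th percentile here),
# then flag any sample scoring below it as out-of-sample.
scores_in = similarity_score(inliers, mu)
tau = np.percentile(scores_in, 5)
is_anomaly = similarity_score(anomaly, mu) < tau
```

In the actual framework this scoring role is played by the learned similarity metric (and/or an online one-class neural network) rather than a fixed percentile cut on reconstruction error.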