A short story about GANs at Tooploox - Tooploox

A short introduction to generative models

In recent years, a type of Machine Learning model known as the Generative Adversarial Network (GAN) has become a hot topic. This is mainly due to its capability of generating good-looking and convincing artificial images. One example, generated by the PG-GAN published in [1], can be seen below.

Source: https://github.com/tkarras/progressive_growing_of_gans

Do you recognize these celebrities? No? That is because they are fake celebrities, generated by a machine learning model!

The idea of a GAN was initially proposed by Ian Goodfellow in [2]. It is based on Game Theory and involves training two competing neural networks: a generator network G and a discriminator network D. The goal is to train the generator G to sample from the data distribution by transforming a noise vector z. The discriminator D is trained to distinguish samples produced by G from samples drawn from the data distribution (i.e. images of celebrities in the above example). The architecture of the model is presented below.

In practice, the training process can be described as a duel between two players, where:

  • Generator G(z) tries to fool the discriminator by generating real-looking images,
  • Discriminator D(x) tries to distinguish between real and fake images.
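Formally, this duel corresponds to the minimax objective from [2], in which the discriminator maximizes and the generator minimizes the value function:

```latex
\min_G \max_D \; V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\left[\log\left(1 - D(G(z))\right)\right]
```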

GAN models are widely used for generating artificial images, but there are plenty of other applications where they can be applied: semi-supervised classification, image retrieval, style search and many others. It is also worth mentioning that a GAN-generated picture, created by students, was recently sold at auction for $432,500 [3]. We would like to share with you our most recent research [4], which uses GAN models to learn binary descriptors.

<h2>Learning binary codes</h2>

Compact binary representations of images are instrumental for a multitude of computer vision applications, including image retrieval, simultaneous localization and mapping (SLAM), and large-scale 3D reconstruction. Traditionally, hand-crafted image descriptors such as ORB, BRIEF, SIFT and SURF [5,6] were used for this purpose in Computer Vision. The idea of descriptors is that they first detect and then describe local feature points, allowing those points to be matched across sequences of image frames. In addition, these features can be used as an input to SVM classifiers, which associate pictures with classes (e.g. images of dogs vs. cats).

The modern approach in this field aims at learning binary features directly using deep neural networks (see the figure below).

[responsive imageid='3338' size1='0' size2='600' size3='1000']

In practice, we would like to create a neural network that returns a vector of binary features for a given input image. It is highly desirable, especially when the codes are used for image retrieval, that similar images receive similar codes. In the considered example, the codes of images containing cars are more similar to each other than to the code generated for an image containing a cat.

It has been shown in [7,8] that GANs can be used for this purpose in the same manner as Convolutional Neural Networks (CNNs) are used for image recognition.
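Once such binary codes are produced, retrieval reduces to ranking database entries by Hamming distance to the query code. A minimal sketch, using toy 8-bit codes (the function name and data are purely illustrative):

```python
import numpy as np

def retrieve(query_code, db_codes, top_k=3):
    """Return indices of the top_k database codes closest to the
    query in Hamming distance (ascending)."""
    dists = np.sum(db_codes != query_code, axis=1)  # Hamming distance per row
    return np.argsort(dists, kind="stable")[:top_k]

# Toy 8-bit codes; entry 1 differs from the query in a single bit.
query = np.array([1, 1, -1, 1, -1, -1, 1, 1])
db = np.array([
    [-1, -1,  1, -1,  1,  1, -1, -1],   # all 8 bits flipped
    [ 1,  1, -1,  1, -1, -1,  1, -1],   # 1 bit flipped
    [ 1, -1, -1,  1,  1, -1,  1,  1],   # 2 bits flipped
])
print(retrieve(query, db, top_k=2))  # [1 2]
```

With real descriptors the codes would come from the trained network, but the ranking step stays exactly this simple, which is what makes binary codes attractive for large-scale retrieval.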
The Convolutional Neural Network was the first Deep Learning approach to beat the classical SVM classifiers [9] in the ImageNet competition [10].

<h2>BinGAN approach - one of the four papers in NIPS from Poland</h2>

During our work at Tooploox we created the BinGAN model, a state-of-the-art model which <strong>will be presented at Neural Information Processing Systems 2018</strong>, one of the largest and most important Machine Learning conferences in the world.

We proudly present our BinGAN model, which makes use of the training properties of GANs to learn characteristic binary features for image retrieval and matching. The main idea is to take the discriminator of a trained GAN model, cut off the classification layer and use the remaining network to extract binary features. When the number of hidden units in the intermediate layers is large, the vector representations describe images better, because the network has more parameters to adjust and can fit the data more closely. However, this also requires more memory and more computation.

In order to build lower-dimensional vector representations of images, make the learning procedure more effective and avoid overfitting of the network, we developed a special regularization method. In our approach, we introduce a combination of regularizers called the Distance Matching Regularizer (DMR) and the Binary Representation Regularizer (BRE). The regularizers are included in the loss function, which is minimized during the training procedure.

<h2>Distance Matching Regularizer (DMR)</h2>

The DMR transfers Hamming distances from high-dimensional to low-dimensional layers. Roughly speaking, the information from the deeper layers is propagated into the shallow ones. (The BRE, described later, increases the diversity of the binary vectors.)

To introduce the Distance Matching Regularizer, we first have to define the Hamming distance.
Given two binary vectors, it is the number of positions at which the corresponding values differ. For example, for the vectors [1, 1, 1, 1] and [-1, -1, 1, 1] the Hamming distance is 2.

The goal of this regularizer is to transfer the Hamming-distance structure of the higher layer to the lower one, which has a smaller number of dimensions.

To describe it more precisely, we have to introduce two functions:

\begin{align}
sign(a) = \frac{a}{|a|}
\end{align}

The function sign(·) is applied to each element of the high-dimensional layer and results in binary codes containing -1 or 1 for each element.

\begin{align}
softsign(a) = \frac{a}{|a| + \gamma}
\end{align}

where γ is a hyperparameter.

[responsive imageid='3347' size1='0' size2='600' size3='1000']

The soft sign is used in the low-dimensional layer and provides a quantization technique. Its output consists of continuous values in (-1, 1). Assuming that γ is 0.001 and the input is [1, -100, 0.001], the output vector is approximately [0.999, -0.99999, 0.5]. This function is continuous so that gradients can be backpropagated, which is needed for the training procedure, and because its values are close to the binary values {-1, 1}, it is still possible to calculate the Hamming distance.

The notation we will use to explain the regularizers is listed here:
<ul>
 	<li>f(x) denotes the low-dimensional layer with K hidden units,</li>
 	<li>h(x) denotes the high-dimensional layer with M units,</li>
 	<li>b_f is the result of the sign(·) function applied to the units of layer f(x),</li>
 	<li>b_h is the result of the sign(·) function applied to the units of layer h(x),</li>
 	<li>s_f is the result of the softsign(·) function applied to the units of layer f(x),</li>
 	<li>s_h is the result of the softsign(·) function applied to the units of layer h(x).</li>
</ul>
These are the components used in the regularizers. To explain more precisely how the regularizers work, we have to look into the definitions.
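The Hamming distance and the two quantization functions can be sketched in a few lines of numpy (a minimal sketch; the γ value matches the example above):

```python
import numpy as np

GAMMA = 0.001  # the gamma hyperparameter from the example above

def hamming(b1, b2):
    """Number of positions at which two binary codes differ."""
    return int(np.sum(np.asarray(b1) != np.asarray(b2)))

def sign(a):
    """Hard binarization a / |a|: every element becomes -1 or 1."""
    return a / np.abs(a)

def softsign(a):
    """Soft, differentiable quantization a / (|a| + gamma), in (-1, 1)."""
    return a / (np.abs(a) + GAMMA)

print(hamming([1, 1, 1, 1], [-1, -1, 1, 1]))      # 2
print(softsign(np.array([1.0, -100.0, 0.001])))   # ≈ [ 0.999 -0.99999  0.5 ]
```

Note that sign(·) has zero gradient almost everywhere, which is exactly why the soft variant is used wherever backpropagation has to flow.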
The Hamming distance between two binary vectors b_1 and b_2 of length M can be expressed using a dot product: Hamming(b_1, b_2) = −0.5 · (b_1^T b_2 − M). As a consequence, distant vectors are characterized by low-valued dot products and close vectors by high-valued ones.

Combining the Hamming distance definition with l(d_h, d_f) = 2 · |d_h − d_f|, the empirical expected value of the loss function used in DMR, we get:

\begin{equation}\label{eq:l_dmr}
L_{DMR} = \frac{1}{N(N-1)} \sum_{k,j=1, k \neq j}^N \left|\frac{\mathbf{b}_{h,k}^T \mathbf{b}_{h,j}}{M} - \frac{\mathbf{s}_{f,k}^T \mathbf{s}_{f,j}}{K}\right|,
\end{equation}

where N is the number of images in the batch, so the loss is calculated over every pair of images in the batch (k and j index the images in the mini-batch). The term consisting of the b_h vectors is assumed to be constant during training, so gradients are only computed to update the layers that produce the s_f codes. Both terms are normalized by their numbers of elements, M and K (the vector dimensions).

It can be visualized in this way:

[responsive imageid='3337' size1='0' size2='600' size3='1000']

Here NiN is a Network-in-Network layer: nin1 is the lower-dimensional layer and nin2 the higher-dimensional one. The GPool in figure 4 is the average pooling layer.

<h2>Adjusted Binarization Representation Entropy (BRE)</h2>

The second regularizer we introduce is called the Adjusted Binarization Representation Entropy (BRE). This regularizer increases the diversity of binary vectors in the low-dimensional layer and consists of two parts.

The first part, called L_{ME}, calculates the average of the \bar{s} values over the K hidden units and forces the normalization of vector products. Here \bar{s} is the average over the N-element batch, calculated from the softsign values. This forces each unit to have an average of 0, which is important for the calculation of the loss function.
\begin{equation}\label{eq:l_me}
L_{ME} = \frac{1}{K} \sum_{k=1}^K (\bar{s}_{f,k})^2,
\end{equation}

Our BRE regularizer differs from the original in the part that we call L_{MAC}, which is a weighted version of the original L_{AC} (defined in [11]). Basically, pairs whose dot product is far from zero are downweighted, and pairs whose dot product is close to zero are upweighted; Z is the normalization constant. The L_{MAC} regularizer minimizes the correlation between image representations and hence increases their diversity, which can also be seen as maximizing entropy.

\begin{equation}
L_{MAC} = \sum_{k,j=1, k \neq j}^N \frac{\alpha_{k,j}}{Z} \frac{|\mathbf{s}^T_{f,k} \mathbf{s}_{f,j}|}{K},
\end{equation}

The final training loss has the form:

\begin{equation}\label{eq:total_loss}
L = L_D + \lambda_{DMR} \cdot L_{DMR} + \lambda_{BRE} \cdot (L_{ME} + L_{MAC})
\end{equation}

The L_{BRE} term is defined as the sum of L_{MAC} and L_{ME}, and the lambdas are hyperparameters of the model, set experimentally.

<h2>Architecture</h2>

For the image matching task the discriminator is composed of:
<ul>
 	<li>7 convolutional layers (3×3 kernels: 3 layers with 96 kernels and 4 layers with 128 kernels),</li>
 	<li>two Network-in-Network (NiN) layers (with 256 and 128 units, respectively),</li>
 	<li>a discriminative layer.</li>
</ul>
For image retrieval the discriminator is composed of:
<ul>
 	<li>7 convolutional layers (3×3 kernels: 3 layers with 96 kernels and 4 layers with 192 kernels),</li>
 	<li>two NiN layers with 192 units each,</li>
 	<li>one fully-connected layer (in three variants: 16, 32 or 64 units) and a discriminative layer.</li>
</ul>
For the low-dimensional feature space b_f we take the fully-connected layer, and for the high-dimensional space b_h we take the average-pooled last NiN layer.
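Assuming the batch codes have already been computed (shapes and names are illustrative), the loss terms above can be sketched in numpy. One simplification to flag loudly: the α weights of L_MAC are left uniform here for brevity, whereas the paper downweights pairs with nonzero dot products; the λ values are also purely illustrative:

```python
import numpy as np

def dmr_loss(b_h, s_f):
    """Distance Matching Regularizer: match normalized dot products
    (equivalently, Hamming distances) between the high-dim hard codes
    b_h of shape (N, M) and the low-dim soft codes s_f of shape (N, K)."""
    N, M = b_h.shape
    K = s_f.shape[1]
    total = 0.0
    for k in range(N):
        for j in range(N):
            if k != j:
                total += abs(b_h[k] @ b_h[j] / M - s_f[k] @ s_f[j] / K)
    return total / (N * (N - 1))

def l_me(s_f):
    """Push every low-dim unit to have zero mean over the batch."""
    s_bar = s_f.mean(axis=0)              # per-unit batch average, shape (K,)
    return float(np.mean(s_bar ** 2))

def l_mac(s_f):
    """Penalize correlated soft codes (uniform alpha weights here)."""
    N, K = s_f.shape
    Z = N * (N - 1)                       # number of off-diagonal pairs
    total = 0.0
    for k in range(N):
        for j in range(N):
            if k != j:
                total += abs(s_f[k] @ s_f[j]) / (K * Z)
    return total

# Toy batch: mutually orthogonal codes give dmr_loss = l_mac = 0.
s_f = np.array([[ 1.0, -1.0,  1.0, -1.0],
                [ 1.0,  1.0, -1.0, -1.0]])
b_h = np.array([[ 1, -1,  1, -1,  1, -1],
                [ 1,  1, -1, -1,  1,  1]])

lam_dmr, lam_bre, L_D = 0.05, 0.05, 1.0   # illustrative values
L = L_D + lam_dmr * dmr_loss(b_h, s_f) + lam_bre * (l_me(s_f) + l_mac(s_f))
```

The double loops make the pairwise structure of the sums explicit; a production implementation would vectorize them as matrix products over the whole batch.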
<h2>Experiments</h2>

<h3>Image retrieval</h3>

In the task of image retrieval, we take a query image (red in the left figure and the leftmost column in the right figure) and search for the images with the closest Hamming distance to the query in the binary descriptor space.

[responsive imageid='3349' size1='0' size2='600' size3='1000'] [responsive imageid='3350' size1='0' size2='600' size3='1000']

In this experiment, we use the CIFAR-10 dataset to evaluate the quality of our approach in image retrieval. CIFAR-10 has 10 categories, each composed of 6,000 color images with a resolution of 32 × 32. The whole dataset has 50,000 training and 10,000 testing images.

[responsive imageid='3516' size1='0' size2='600' size3='1000'] <em>Table 1. <span style="font-weight: 400;">Results on Cifar10 (mAP).</span></em>

Table 1 shows the mean average precision of the top 1000 returned images for different numbers of hash bits on the CIFAR-10 dataset. Our method outperforms the DBD-MQ method, the unsupervised method that previously reported state-of-the-art results on this dataset, for 16, 32 and 64 bits; the improvement in mean average precision reaches over 40%, 31% and 15%, respectively. The most significant performance boost can be observed for the shortest binary strings. Thanks to the loss terms introduced in our method, we explicitly model the distribution of the information in a low-dimensional binary space.

<h3>Image matching</h3>

As mentioned before, BinGAN can be used as an image descriptor, meaning that similar patches should have similar vector representations.

[responsive imageid='3518' size1='0' size2='600' size3='1000'] <em>Table 2. <span style="font-weight: 400;">Results on Brown dataset (FPR@95%).</span></em>

To evaluate the performance of our approach on the image matching task, we use the Brown dataset.
We train binary local feature descriptors using our BinGAN method and compare them with competing previous methods.

The Brown dataset is composed of three subsets of patches: Yosemite, Liberty and Notredame. The resolution of the patches is 64 × 64, although we subsample them to 32 × 32 to increase the processing efficiency. Next, we use our method to create binary descriptors. The data is split into training and test sets according to the provided ground truth, with 50,000 training pairs (25,000 matched and 25,000 non-matched) and 10,000 test pairs (5,000 matched and 5,000 non-matched).

In table 2 we present the false positive rates at 95% true positives (FPR@95%) obtained for our BinGAN descriptor compared with the state-of-the-art binary descriptors on the Brown dataset (%). As we can see, it has the lowest error in most cases.

See more in: <a href="https://arxiv.org/pdf/1806.06778.pdf">https://arxiv.org/pdf/1806.06778.pdf</a>
The code for our method is available at: <a href="https://github.com/maciejzieba/binGAN">github.com/maciejzieba/binGAN</a>

<h3>What's next?</h3>

Currently, there are two research branches at Tooploox:
1. generating point clouds using GANs,
2. using BinGAN for the style search approach already developed at Tooploox.

<strong>Authors:</strong> Piotr Semberecki, Maciej Zięba

<em>Literature:</em>

[1] Karras, Tero, et al. "Progressive growing of GANs for improved quality, stability, and variation." ICLR, 2017.
[2] Goodfellow, Ian, et al. "Generative adversarial nets." NIPS, 2014.
[3] Christie's sells its first AI portrait for $432,500, beating estimates of $10,000. https://www.vox.com/the-goods/2018/10/29/18038946/art-algorithm
[4] Zięba et al. “BinGAN: Learning Compact Binary Descriptors with a Regularized GAN” NIPS, 2018.
[5] Lowe, David G. “Distinctive image features from scale-invariant keypoints.” International journal of computer vision 60.2 (2004): 91-110.
[6] Rublee, Ethan, et al. "ORB: An efficient alternative to SIFT or SURF." 2011 IEEE International Conference on Computer Vision (ICCV). IEEE, 2011.
[7] A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. In ICLR, 2016.
[8] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. In NIPS, 2016.
[9] Lin, Yuanqing, et al. "Imagenet classification: fast descriptor coding and large-scale SVM training." Large scale visual recognition challenge (2010).
[10] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012.
[11] Y. Cao, G. W. Ding, K. Y.-C. Lui, and R. Huang. Improving GAN training via binarized representation entropy (BRE) regularization. In ICLR, 2018.
