Apprentissage de Représentations Discriminatives par Méthodes à Noyau

by Rafael Sampaio De Rezende

Thesis project in Computer Science

Under the supervision of Jean Ponce and Francis Bach.

Thesis in preparation at Paris Sciences et Lettres, within the École doctorale de Sciences mathématiques de Paris Centre (Paris), in partnership with LIENS - Laboratoire d'informatique de l'École normale supérieure (laboratory) and the École normale supérieure (Paris; 1985-....) (institution of preparation), since 18-09-2013.

  • Translated title

    Learning Discriminative Representations via Kernel Methods

  • Abstract

    This document gives a brief outline of my thesis, entitled "Learning Discriminative Representations via Kernel Methods."

    1. Contextual Fisher Vectors

    We study the dependency of a Fisher vector representation on the Gaussian mixture model used as its codewords. Indeed, a classical result used to justify the normalization of Fisher vectors for image classification tasks states that a normalized Fisher vector is independent of its background information. However, this holds only if we assume that the background information is modeled by the codewords' probability function. In this chapter, we introduce the use of multiple Gaussian mixture models for different backgrounds, represented by different scene categories. We analyze how different probability functions affect the performance of these representations for object classification, as well as the impact of the scene category as a latent variable.

    2. Square-Loss Exemplar Machines for Image Retrieval

    This chapter proposes an extension to the exemplar-SVM feature encoding pipeline first proposed by Zepeda et al. We first show that, by replacing the hinge loss with the square loss in the ESVM cost function, similar image retrieval results can be obtained at a fraction of the computational cost. We call this model the square-loss exemplar machine, or SLEM. Second, we introduce a kernelized SLEM variant which enjoys the same computational advantages but displays improved performance. We present experiments that establish the performance and computational advantages of our methods using a large array of base feature representations and standard image retrieval datasets.

    3. Square-Loss Exemplar Machines as Matching Functions

    In this chapter, we propose using SLEM for the dense semantic matching problem. We extend the work of Bristow et al., who proposed the use of a highly discriminative matching function to better estimate optical flow for semantically similar images. We further study the limits and advantages of using linear and kernel matching functions on diverse matching datasets.
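
    The computational appeal of the square loss is that, unlike the hinge loss of an exemplar SVM, it admits a closed-form solution. The following is a minimal sketch of that idea only, not the thesis's exact formulation (the bias term, feature normalization, and the kernelized variant are omitted, and `lam` is a hypothetical regularization weight): one "positive" feature is separated from a pool of generic negatives by ridge regression, and the resulting weight vector serves as the new encoding.

    ```python
    import numpy as np

    def slem_encode(x_pos, X_neg, lam=1.0):
        """Encode x_pos as the ridge-regression (square-loss) classifier
        separating it from a pool of generic negative features."""
        A = np.vstack([x_pos, X_neg])   # design matrix, exemplar in row 0
        y = np.full(A.shape[0], -1.0)
        y[0] = 1.0                      # the exemplar is the only positive
        d = A.shape[1]
        # closed-form normal equations: (A^T A + lam*I) w = A^T y
        w = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ y)
        return w

    rng = np.random.default_rng(0)
    X_neg = rng.normal(size=(100, 16))  # toy stand-in for a negative pool
    x_pos = rng.normal(size=16)         # toy base feature of a query image
    w = slem_encode(x_pos, X_neg)       # w is the square-loss encoding
    ```

    Two images can then be compared through their encodings (e.g., by cosine similarity). Since the negative pool is shared across exemplars, each new image costs only one linear solve, rather than one iteratively trained SVM per exemplar.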