Kooperativer Bibliotheksverbund Berlin Brandenburg


Export
  • 1
    Language: English
    In: IEEE Transactions on Knowledge and Data Engineering, 01 November 2016, Vol.28(11), pp.2884-2894
    Description: In this study, we propose a novel vector quantization algorithm for Approximate Nearest Neighbor (ANN) search, based on a joint competitive learning strategy and hence called competitive quantization (CompQ). CompQ is a hierarchical algorithm that iteratively minimizes the quantization error by jointly optimizing the codebooks in each layer using a gradient descent approach. An extensive set of experimental results and comparative evaluations shows that CompQ outperforms the state of the art while retaining comparable computational complexity. (An illustrative sketch of this kind of hierarchical quantizer follows the result list below.)
    Keywords: Vector Quantization ; Encoding ; Optimization ; Hamming Distance ; Electronic Mail ; Nearest Neighbor Searches ; Approximate Nearest Neighbor Search ; Binary Codes ; Large-Scale Retrieval ; Engineering ; Computer Science
    ISSN: 1041-4347
    E-ISSN: 1558-2191
    Source: IEEE Conference Publications
    Source: IEEE Journals & Magazines 
    Source: IEEE Xplore
  • 2
    Language: English
    In: IEEE Transactions on Knowledge and Data Engineering, 01 July 2016, Vol.28(7), pp.1722-1733
    Description: Approximate Nearest Neighbor (ANN) search has become a popular approach for fast and efficient retrieval on very large-scale datasets in recent years, as the size and dimension of data grow continuously. In this paper, we propose a novel vector quantization method for ANN search that enables faster and more accurate retrieval on publicly available datasets. We define vector quantization as a multiple affine subspace learning problem and explore the quantization centroids on multiple affine subspaces. We propose an iterative approach to minimize the quantization error, yielding a novel quantization scheme that outperforms the state-of-the-art algorithms. The computational cost of our method is also comparable to that of the competing methods. (A generic subspace-clustering sketch illustrating this idea follows the result list below.)
    Keywords: Principal Component Analysis ; Artificial Neural Networks ; Vector Quantization ; Iterative Methods ; Encoding ; Nearest Neighbor Searches ; Approximate Nearest Neighbor Search ; Binary Codes ; Large-Scale Retrieval ; Subspace Clustering ; Engineering ; Computer Science
    ISSN: 1041-4347
    E-ISSN: 1558-2191
  • 3
    Language: English
    In: 2018 25th IEEE International Conference on Image Processing (ICIP), October 2018, pp.311-315
    Description: The massive size of the data that Machine Learning models must process today poses new challenges related to their computational complexity and memory footprint. These challenges span all processing steps involved in applying such models, from the fundamental operations needed to evaluate distances between vectors, to the optimization of large-scale systems, e.g. for non-linear regression using kernels, or the speed-up of deep learning models with billions of parameters. To address these challenges, new approximate solutions based on matrix/tensor decompositions, randomization and quantization strategies have recently been proposed. This paper provides a comprehensive review of the related methodologies and discusses their connections. (A minimal low-rank approximation sketch illustrating one such strategy follows the result list below.)
    Keywords: Kernel ; Matrix Decomposition ; Training ; Computational Modeling ; Quantization (Signal) ; Eigenvalues and Eigenfunctions ; Artificial Neural Networks ; Approximate Nearest Neighbor Search ; Vector Quantization ; Hashing ; Approximate Kernel-Based Learning ; Low-Rank Approximation ; Neural Network Acceleration ; Applied Sciences
    ISSN: 1522-4880
    E-ISSN: 2381-8549
    Source: IEEE Conference Publications
    Source: IEEE Xplore
    Source: IEEE Journals & Magazines 
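The first record (CompQ) describes a hierarchical vector quantizer whose codebooks are optimized jointly by gradient descent on the quantization error. The sketch below shows that general idea in a residual-style form; the layer count, codebook size, learning rate and the `encode`/`train_step` helpers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def encode(x, codebooks):
    """Pick one codeword per layer; each layer quantizes the residual
    left over by the previous layers (illustrative, not CompQ itself)."""
    residual = x.copy()
    codes = []
    for C in codebooks:                      # C: (K, d) codewords of one layer
        idx = int(np.argmin(((residual - C) ** 2).sum(axis=1)))
        codes.append(idx)
        residual -= C[idx]
    return codes, residual

def train_step(X, codebooks, lr=0.05):
    """One gradient-descent pass on the quantization error
    sum_i ||x_i - sum_l C_l[c_{i,l}]||^2: every codeword selected for a
    vector is nudged toward that vector's unexplained residual."""
    for x in X:
        codes, residual = encode(x, codebooks)
        for C, idx in zip(codebooks, codes):
            C[idx] += lr * residual
    return codebooks

# Toy usage: two layers of 16 codewords for 32-dimensional vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))
codebooks = [rng.normal(size=(16, 32)) for _ in range(2)]
for _ in range(10):
    codebooks = train_step(X, codebooks)
```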
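The second record defines vector quantization as learning multiple affine subspaces. As a loose illustration, the sketch below runs a generic k-affine-subspaces loop: assign each vector to the subspace with the smallest reconstruction error, then refit every subspace by PCA on its members. The number of subspaces, their dimension and the iteration count are arbitrary assumptions; this is not the method from the paper.

```python
import numpy as np

def fit_affine_subspaces(X, k=4, dim=2, iters=20, seed=0):
    """Generic k-affine-subspaces clustering (illustrative only)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Initialise each subspace with a random data point and a random basis.
    means = X[rng.choice(n, size=k, replace=False)].copy()
    bases = [np.linalg.qr(rng.normal(size=(d, dim)))[0] for _ in range(k)]

    for _ in range(iters):
        # Assignment: reconstruction error of every vector against every subspace.
        errs = np.empty((n, k))
        for j in range(k):
            centered = X - means[j]
            proj = centered @ bases[j] @ bases[j].T   # projection onto subspace j
            errs[:, j] = ((centered - proj) ** 2).sum(axis=1)
        labels = errs.argmin(axis=1)

        # Update: refit each subspace's offset and principal directions by PCA.
        for j in range(k):
            members = X[labels == j]
            if len(members) == 0:
                continue
            means[j] = members.mean(axis=0)
            _, _, vt = np.linalg.svd(members - means[j], full_matrices=False)
            bases[j] = vt[: min(dim, vt.shape[0])].T
    return means, bases, labels

# Toy usage on random data.
rng = np.random.default_rng(1)
means, bases, labels = fit_affine_subspaces(rng.normal(size=(500, 16)))
```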
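The third record surveys approximate techniques such as matrix/tensor decompositions, randomization and quantization for reducing computation and memory. As one minimal illustration of the low-rank idea, the sketch below replaces a Gaussian kernel matrix with its truncated SVD so that a matrix-vector product costs O(n·r) instead of O(n²); the matrix size and the rank r are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 1000, 50                                  # illustrative size and rank

# A smooth (hence approximately low-rank) Gaussian kernel matrix on random 2-D points.
pts = rng.normal(size=(n, 2))
sq_dists = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
K = np.exp(-sq_dists)

# Truncated SVD factors: K is approximated by U_r diag(s_r) Vt_r.
U, s, Vt = np.linalg.svd(K, hermitian=True)
U_r, s_r, Vt_r = U[:, :r], s[:r], Vt[:r, :]

x = rng.normal(size=n)
exact = K @ x                                    # O(n^2) per product
approx = U_r @ (s_r * (Vt_r @ x))                # O(n*r) per product
rel_err = np.linalg.norm(exact - approx) / np.linalg.norm(exact)
print(f"relative error of the rank-{r} approximation: {rel_err:.2e}")
```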