Kooperativer Bibliotheksverbund Berlin-Brandenburg



  • 1
    Language: English
    In: IEEE Transactions on Circuits and Systems for Video Technology, March 2010, Vol.20(3), pp.407-416
    Description: Efficient bit stream adaptation and resilience to packet losses are two critical requirements in scalable video coding for transmission over packet-lossy networks. Various scalable layers have highly distinct importance, measured by their contribution to the overall video quality. This distinction is especially significant in scalable H.264/advanced video coding (AVC) video, due to the employed prediction hierarchy and the drift propagation when quality refinements are missing. Therefore, efficient bit stream adaptation and unequal protection of these layers are of special interest in scalable H.264/AVC video. This paper proposes an algorithm to accurately estimate the overall distortion of decoder-reconstructed frames due to enhancement layer truncation, drift/error propagation, and error concealment in scalable H.264/AVC video. The method recursively computes the total expected decoder distortion at the picture level for each layer in the prediction hierarchy. This ensures low computational cost, since it bypasses highly complex pixel-level motion compensation operations. Simulation results show accurate distortion estimation at various channel loss rates. The estimate is further integrated into a cross-layer optimization framework for optimized bit extraction and content-aware channel rate allocation. Experimental results demonstrate that precise distortion estimation enables our proposed transmission system to achieve a significantly higher average video peak signal-to-noise ratio compared to a conventional content-independent system.
    Keywords: Error Correction Codes ; Robustness ; Streaming Media ; Automatic Voltage Control ; Video Coding ; Decoding ; Resilience ; Propagation Losses ; Protection ; Computational Efficiency ; Channel Coding ; Error Correction Coding ; Multimedia Communication ; Video Signal Processing ; Engineering
    ISSN: 1051-8215
    E-ISSN: 1558-2205
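The picture-level recursion described in the abstract above can be illustrated with a minimal sketch. The loss probability `p`, the per-frame distortion pairs, and the drift `attenuation` factor are illustrative placeholders, not the paper's actual distortion model:

```python
def expected_distortion(frames, p, attenuation=0.9):
    """Recursively estimate expected decoder distortion per frame.

    frames: list of (d_decoded, d_concealed) pairs: distortion when the
    frame arrives vs. when it is lost and concealed from its reference.
    p: packet-loss probability; attenuation: drift decay per frame.
    """
    estimates = []
    prev = 0.0  # expected distortion of the reference frame
    for d_dec, d_con in frames:
        # Received frames inherit attenuated drift via prediction;
        # lost frames add concealment error plus the reference's drift.
        e = (1 - p) * (d_dec + attenuation * prev) + p * (d_con + prev)
        estimates.append(e)
        prev = e
    return estimates
```

Because each frame's estimate depends only on its reference's estimate, the cost is linear in the number of frames, which is the point of a picture-level (rather than pixel-level) recursion.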
  • 2
    Language: English
    In: NeuroImage, 01 March 2011, Vol.55(1), pp.113-132
    Description: In this paper, we propose a novel symmetrical EEG/fMRI fusion method which combines EEG and fMRI by means of a common generative model. We use a total variation (TV) prior to model the spatial distribution of the cortical current responses and hemodynamic response functions, and utilize spatially adaptive temporal priors to model their temporal shapes. The spatial adaptivity of the prior model allows for adaptation to the local characteristics of the estimated responses and leads to high estimation performance for the cortical current distribution and the hemodynamic response functions. We utilize a Bayesian formulation with a variational Bayesian framework and obtain a fully automatic fusion algorithm. Simulations with synthetic data and experiments with real data from a multimodal study on face perception demonstrate the performance of the proposed method. Highlights: Bayesian fusion of EEG and fMRI by means of a common generative model; spatiotemporally adaptive prior modeling in a fully Bayesian context; an improved prior model enabling more accurate source localization.
    Keywords: Multimodal Fusion ; M/EEG Source Localization ; Spatial Adaptivity ; Total Variation ; Variational Bayes ; Medicine
    ISSN: 1053-8119
    E-ISSN: 1095-9572
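The total variation prior at the heart of the method above penalizes the summed gradient magnitude of a spatial map. A minimal sketch of the isotropic TV functional itself (not of the full variational Bayesian fusion):

```python
import numpy as np

def total_variation(x):
    """Isotropic total variation of a 2D array: sum of gradient magnitudes."""
    dx = np.diff(x, axis=1)[:-1, :]   # horizontal finite differences
    dy = np.diff(x, axis=0)[:, :-1]   # vertical finite differences
    return np.sqrt(dx ** 2 + dy ** 2).sum()
```

TV is zero for a constant map and grows with edge length and height, which is why a TV prior favors piecewise-smooth, sharply bounded current distributions.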
  • 3
    Language: English
    In: Angewandte Chemie International Edition, 20 August 2018, Vol.57(34), pp.10910-10914
    Description: Nonlinear unmixing of hyperspectral reflectance data is one of the key problems in quantitative imaging of painted works of art. The approach presented is to interrogate a hyperspectral image cube by first decomposing it into a set of reflectance curves representing pure basis pigments and second estimating the scattering and absorption coefficients of each pigment in a given pixel to produce estimates of the component fractions. This two-step algorithm uses a deep neural network to qualitatively identify the constituent pigments in any unknown spectrum and, based on the pigment(s) present, uses Kubelka–Munk theory to estimate the pigment concentration on a per-pixel basis. Using hyperspectral data acquired on a set of mock-up paintings and a well-characterized illuminated folio from the 15th century, the performance of the proposed algorithm is demonstrated for pigment recognition and quantitative estimation of concentration.
    Keywords: Deep Neural Network Classification ; Heritage Science ; Nonlinear Unmixing Kubelka–Munk Theory ; Visible Hyperspectral Imaging
    ISSN: 1433-7851
    E-ISSN: 1521-3773
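The Kubelka–Munk step can be illustrated on its own: K/S = (1 − R)² / (2R), and a mixture's K/S is modeled as the concentration-weighted sum of the pure pigments' K/S values, so fractions are recoverable by least squares. The spectra below are made up, and the paper's neural-network pigment classifier is omitted:

```python
import numpy as np

def ks(reflectance):
    """Kubelka-Munk K/S value for reflectance in (0, 1]."""
    return (1 - reflectance) ** 2 / (2 * reflectance)

def ks_to_reflectance(k):
    """Invert K/S back to reflectance (root of the quadratic in (0, 1])."""
    return 1 + k - np.sqrt(k ** 2 + 2 * k)

def unmix(mixture_r, basis_r):
    """Estimate pigment fractions by least squares in K/S space.

    mixture_r: (wavelengths,) mixture reflectance spectrum.
    basis_r:   (pigments, wavelengths) pure pigment spectra.
    """
    A = ks(basis_r).T                     # (wavelengths, pigments)
    frac, *_ = np.linalg.lstsq(A, ks(mixture_r), rcond=None)
    return frac
```

Working in K/S space is what makes the mixing model (approximately) linear even though reflectance itself mixes nonlinearly.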
  • 4
    Language: English
    In: Signal Processing: Image Communication, April, 2013, Vol.28(4), p.323(11)
    Description: Visual query-by-capture applications call for a compact visual descriptor with minimum descriptor length. Preserving the visual identification performance while minimising the bit rate is a focus of the on-going MPEG-7 CDVS (Compact Descriptors for Visual Search) standardisation effort. In this paper we tackle this problem by adopting Laplacian embedding for SIFT feature compression and employing topology verification based on a novel graph cut measure. In contrast to previous feature compression schemes, we approach the problem by finding a Laplacian embedding that preserves the nearest neighbour relations in feature space. Furthermore, we develop an efficient yet effective topology verification (TV) scheme to perform spatial consistency checking. In contrast to previous works on geometric verification, instead of enumerating all possible combinations of coordinate alignments of an image pair, this TV solution verifies possibly misaligned coordinate sets with a learning method which acquires a proper boundary between the topology representations of matched and non-matched image pairs. Furthermore, this TV solution is invariant to in-plane rotation and scaling and is quite resilient to a range of out-of-plane rotations. The proposed Laplacian embedding and topology verification scheme are tested with the CDVS dataset and are found to be effective. By Xin Xin, Zhu Li, and Aggelos K. Katsaggelos. Full text: http://dx.doi.org/10.1016/j.image.2012.11.003
    ISSN: 0923-5965
    Source: Cengage Learning, Inc.
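The neighborhood-preserving Laplacian embedding idea can be sketched with a plain eigen-decomposition on toy vectors. This is the textbook Laplacian eigenmaps construction (unweighted symmetric k-NN graph), not the paper's descriptor compression or the CDVS pipeline:

```python
import numpy as np

def laplacian_embedding(X, k_neighbors=3, dim=2):
    """Embed the rows of X into `dim` dimensions via Laplacian eigenmaps."""
    n = X.shape[0]
    # Pairwise squared distances and a symmetric k-NN adjacency matrix
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k_neighbors + 1]  # skip self at index 0
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(1)) - W                        # graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    # Drop the constant eigenvector (eigenvalue ~ 0); the next `dim`
    # eigenvectors give coordinates that keep graph neighbors close.
    return vecs[:, 1:dim + 1]
```

Because nearby points share graph edges, the low-order eigenvectors vary slowly across edges, which is exactly the nearest-neighbour-preserving property the descriptor compression relies on.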
  • 5
    Language: English
    In: IEEE Transactions on Image Processing, March 2012, Vol.21(3), pp.1391-1405
    Description: In this paper, we are concerned with image downsampling using subpixel techniques to achieve superior sharpness for small liquid crystal displays (LCDs). Such a problem exists when a high-resolution image or video is to be displayed on low-resolution display terminals. Limited by the low-resolution display, we have to shrink the image. Signal-processing theory tells us that optimal decimation requires low-pass filtering with a suitable cutoff frequency, followed by downsampling. In doing so, we need to remove many useful image details causing blurring. Subpixel-based downsampling, taking advantage of the fact that each pixel on a color LCD is actually composed of individual red, green, and blue subpixel stripes, can provide apparent higher resolution. In this paper, we use frequency-domain analysis to explain what happens in subpixel-based downsampling and why it is possible to achieve a higher apparent resolution. According to our frequency-domain analysis and observation, the cutoff frequency of the low-pass filter for subpixel-based decimation can be effectively extended beyond the Nyquist frequency using a novel antialiasing filter. Applying the proposed filters to two existing subpixel downsampling schemes called direct subpixel-based downsampling (DSD) and diagonal DSD (DDSD), we obtain two improved schemes, i.e., DSD based on frequency-domain analysis (DSD-FA) and DDSD based on frequency-domain analysis (DDSD-FA). Experimental results verify that the proposed DSD-FA and DDSD-FA can provide superior results, compared with existing subpixel or pixel-based downsampling methods.
    Keywords: Rendering (Computer Graphics) ; Image Resolution ; Image Color Analysis ; Frequency Domain Analysis ; Image Edge Detection ; Color ; Humans ; Downsampling ; Frequency Analysis ; Subpixel Rendering ; Engineering ; Applied Sciences
    ISSN: 1057-7149
    E-ISSN: 1941-0042
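Direct subpixel-based downsampling (DSD) with a 3× horizontal shrink can be sketched in a few lines: each group of three source pixels feeds one output pixel's R, G, and B subpixel stripes. The left-to-right RGB stripe order is an assumption, and the paper's actual contribution, the frequency-domain anti-aliasing filter applied beforehand, is omitted:

```python
import numpy as np

def direct_subpixel_downsample(img):
    """Shrink an RGB image 3x horizontally, mapping each triple of source
    pixels onto one output pixel's R, G, B subpixel stripes."""
    h, w, _ = img.shape
    w3 = w - w % 3                      # drop a ragged tail, if any
    out = np.empty((h, w3 // 3, 3), dtype=img.dtype)
    out[:, :, 0] = img[:, 0:w3:3, 0]    # R from the first pixel of each triple
    out[:, :, 1] = img[:, 1:w3:3, 1]    # G from the second
    out[:, :, 2] = img[:, 2:w3:3, 2]    # B from the third
    return out
```

Sampling each channel at a different horizontal offset is what yields the apparent resolution gain, and also why a carefully designed anti-aliasing filter is needed to control the resulting color fringing.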
  • 6
    Language: English
    In: IEEE Transactions on Circuits and Systems for Video Technology, October 2011, Vol.21(10), pp.1378-1389
    Description: In centralized transportation surveillance systems, video is captured and compressed at low processing power remote nodes and transmitted to a central location for processing. Such compression can reduce the accuracy of centrally run automated object tracking algorithms. In typical systems, the majority of communications bandwidth is spent on encoding temporal pixel variations such as acquisition noise or local changes to lighting. We propose a tracking-aware, H.264-compliant compression algorithm that removes temporal components of low tracking interest and optimizes the quantization of frequency coefficients, particularly those that most influence trackers, significantly reducing bitrate while maintaining comparable tracking accuracy. We utilize tracking accuracy as our compression criterion in lieu of mean squared error metrics. Our proposed system is designed with low processing power and memory requirements in mind, and as such can be deployed on remote nodes. Using H.264/AVC video coding and a commonly used state-of-the-art tracker we show that our algorithm allows for over 90% bitrate savings while maintaining comparable tracking accuracy.
    Keywords: Noise ; Bit Rate ; Streaming Media ; Accuracy ; Quantization ; Pixel ; Radar Tracking ; Quantization ; Urban Transportation Video ; Video Compression ; Video Object Tracking ; Video Processing ; Engineering
    ISSN: 1051-8215
    E-ISSN: 1558-2205
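The step of removing "temporal components of low tracking interest" can be caricatured as thresholded frame differencing before encoding. This is a stand-in only: the paper operates on quantized frequency coefficients inside an H.264-compliant encoder, not on raw pixels:

```python
import numpy as np

def suppress_low_interest_motion(prev, curr, threshold=8):
    """Zero out small temporal pixel changes (sensor noise, lighting
    flicker) before encoding, keeping changes large enough to matter
    to an object tracker."""
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    mask = np.abs(diff) >= threshold          # changes worth spending bits on
    out = prev.astype(np.int16) + diff * mask
    return out.astype(np.uint8)
```

Discarding the sub-threshold residual is where the bitrate saving comes from: the encoder no longer spends bits coding temporal noise that the tracker would ignore anyway.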
  • 7
    Language: English
    In: IEEE Transactions on Image Processing, April 2011, Vol.20(4), pp.984-999
    Description: In this paper, we address the super resolution (SR) problem from a set of degraded low resolution (LR) images to obtain a high resolution (HR) image. Accurate estimation of the sub-pixel motion between the LR images significantly affects the performance of the reconstructed HR image. In this paper, we propose novel super resolution methods where the HR image and the motion parameters are estimated simultaneously. Utilizing a Bayesian formulation, we model the unknown HR image, the acquisition process, the motion parameters and the unknown model parameters in a stochastic sense. Employing a variational Bayesian analysis, we develop two novel algorithms which jointly estimate the distributions of all unknowns. The proposed framework has the following advantages: 1) Through the incorporation of uncertainty of the estimates, the algorithms prevent the propagation of errors between the estimates of the various unknowns; 2) the algorithms are robust to errors in the estimation of the motion parameters; and 3) using a fully Bayesian formulation, the developed algorithms simultaneously estimate all algorithmic parameters along with the HR image and motion parameters, and therefore they are fully-automated and do not require parameter tuning. We also show that the proposed motion estimation method is a stochastic generalization of the classical Lucas-Kanade registration algorithm. Experimental results demonstrate that the proposed approaches are very effective and compare favorably to state-of-the-art SR algorithms.
    Keywords: Image Resolution ; Bayesian Methods ; Pixel ; Estimation ; Strontium ; Noise ; Robustness ; Bayesian Methods ; Parameter Estimation ; Super Resolution ; Total Variation ; Variational Methods ; Engineering ; Applied Sciences
    ISSN: 1057-7149
    E-ISSN: 1941-0042
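The classical Lucas–Kanade registration that the paper generalizes solves a 2×2 linear system built from image gradients. A single Gauss–Newton step for a global translation, valid for small displacements (the paper's stochastic version additionally carries uncertainty through this estimate):

```python
import numpy as np

def lucas_kanade_translation(I1, I2):
    """One-step Lucas-Kanade estimate of a global translation (dx, dy)
    between two grayscale images, from spatial and temporal gradients."""
    Iy, Ix = np.gradient(I1.astype(float))        # np.gradient: axis 0 first
    It = I2.astype(float) - I1.astype(float)
    A = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    b = -np.array([(Ix * It).sum(), (Iy * It).sum()])
    return np.linalg.solve(A, b)                  # least-squares flow (dx, dy)
```

The normal matrix `A` must be well conditioned (gradients in both directions), which is the usual aperture-problem caveat.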
  • 8
    Language: English
    In: IEEE Transactions on Image Processing, October 2013, Vol.22(10), pp.3994-4006
    Description: We propose a novel blind image deconvolution (BID) regularization framework for compressive sensing (CS) based imaging systems capturing blurred images. The proposed framework relies on a constrained optimization technique, which is solved by a sequence of unconstrained sub-problems, and allows the incorporation of existing CS reconstruction algorithms in compressive BID problems. As an example, a non-convex lp quasi-norm with 0 < p < 1 is employed as a regularization term for the image, while a simultaneous auto-regressive regularization term is selected for the blur. Nevertheless, the proposed approach is very general and it can be easily adapted to other state-of-the-art BID schemes that utilize different, application-specific, image/blur regularization terms. Experimental results, obtained with simulations using blurred synthetic images and real passive millimeter-wave images, show the feasibility of the proposed method and its advantages over existing approaches.
    Keywords: Inverse Methods ; Compressive Sensing ; Blind Image Deconvolution ; Constrained Optimization ; Engineering ; Applied Sciences
    ISSN: 1057-7149
    E-ISSN: 1941-0042
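A non-convex lp quasi-norm with 0 < p < 1 is commonly handled by iteratively reweighted least squares, replacing |x|^p with a weighted quadratic majorizer at each step. A scalar denoising toy showing the characteristic behavior (this is not the paper's compressive BID solver; the problem, weights, and parameters are illustrative):

```python
import numpy as np

def lp_denoise(y, p=0.5, lam=0.5, iters=50, eps=1e-8):
    """Minimize 0.5*(x - y)^2 + lam*|x|^p elementwise via IRLS:
    |x|^p is majorized (up to a constant) by w*x^2 with
    w = (p/2)*|x|^(p-2), so each step has a closed-form update."""
    x = y.astype(float).copy()
    for _ in range(iters):
        w = (p / 2.0) * (np.abs(x) + eps) ** (p - 2)
        x = y / (1.0 + 2.0 * lam * w)   # minimizer of the quadratic surrogate
    return x
```

Because the weight w shrinks for large |x| and blows up for small |x|, the penalty nearly preserves strong values while crushing weak ones, a sharper sparsity effect than the convex l1 norm.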
  • 9
    Language: English
    In: IEEE Transactions on Signal Processing, August 2012, Vol.60(8), pp.3964-3977
    Description: Recovery of low-rank matrices has recently seen significant activity in many areas of science and engineering, motivated by recent theoretical results for exact reconstruction guarantees and interesting practical applications. In this paper, we present novel recovery algorithms for estimating low-rank matrices in matrix completion and robust principal component analysis based on sparse Bayesian learning (SBL) principles. Starting from a matrix factorization formulation and enforcing the low-rank constraint in the estimates as a sparsity constraint, we develop an approach that is very effective in determining the correct rank while providing high recovery performance. We provide connections with existing methods in other similar problems and empirical results and comparisons with current state-of-the-art methods that illustrate the effectiveness of this approach.
    Keywords: Bayesian Methods ; Sparse Matrices ; Principal Component Analysis ; Robustness ; Matrix Decomposition ; Estimation ; Mathematical Model ; Bayesian Methods ; Low-Rankness ; Matrix Completion ; Outlier Detection ; Robust Principal Component Analysis ; Sparse Bayesian Learning ; Sparsity ; Variational Bayesian Inference ; Engineering
    ISSN: 1053-587X
    E-ISSN: 1941-0476
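The matrix-completion side can be sketched with the classic hard-impute iteration: alternately refill the observed entries and project onto a fixed-rank SVD. Unlike the paper's sparse Bayesian learning approach, which infers the rank automatically, this toy takes the rank as input:

```python
import numpy as np

def complete_matrix(Y, mask, rank, iters=200):
    """Toy matrix completion by alternating between an SVD projection onto
    rank-`rank` matrices and re-imposing the observed entries of Y.
    mask: boolean array, True where Y is observed."""
    X = np.where(mask, Y, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-`rank` fit
        X = np.where(mask, Y, low)                   # keep observed entries
    return X
```

In the SBL formulation the same low-rank constraint appears as column sparsity of the factors A and B in Y ≈ AB, with irrelevant columns driven to zero by the prior rather than truncated by an explicit SVD.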
  • 10
    Language: English
    In: Journal of the Optical Society of America. A, Optics, image science, and vision, 01 January 2013, Vol.30(1), pp.102-11
    Description: This paper presents a computationally efficient method for the measurement of a dense image correspondence vector field using supplementary data from an inertial navigation sensor (INS). The application is suited to airborne imaging systems, such as an unmanned air vehicle, where size, weight, and power restrictions limit the amount of onboard processing available. The limited processing will typically exclude the use of traditional, but computationally expensive, optical flow and block matching algorithms, such as Lucas-Kanade, Horn-Schunck, or the adaptive rood pattern search. Alternatively, the measurements obtained from an INS, on board the platform, lead to a closed-form solution to the correspondence field. Airborne platforms are well suited to this application because they already possess INSs and global positioning systems as part of their existing avionics package. We derive the closed-form solution for the image correspondence vector field based on the INS data. We then show, through both simulations and real flight data, that the closed-form inertial sensor solution outperforms traditional optical flow and block matching methods.
    Keywords: Exact Solutions ; Images ; Inertial ; Matching ; Mathematical Analysis ; Mathematical Models ; Sensors ; Vectors (Mathematics) ; Lasers, Optics, and Electronics (So) ; Photonics (General) (Ea) ; Optics (Ah)
    ISSN: 1084-7529
    E-ISSN: 1520-8532
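For a planar scene viewed under known camera motion, the correspondence field has the closed form of a plane-induced homography, H = K(R − t·nᵀ/d)K⁻¹. A sketch with hypothetical intrinsics and plane parameters; the paper derives its particular solution from the INS/GPS measurements, and this is only the textbook form of such a mapping:

```python
import numpy as np

def ins_flow(points, K, R, t, n=np.array([0.0, 0.0, 1.0]), d=100.0):
    """Map pixel coordinates through the homography induced by camera
    rotation R, translation t, and a scene plane with normal n at
    distance d, all assumed known from inertial navigation data.

    points: (N, 2) pixel coordinates; returns their (N, 2) new positions."""
    H = K @ (R - np.outer(t, n) / d) @ np.linalg.inv(K)
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # homogeneous
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                   # dehomogenize
```

Evaluating this closed form per pixel is far cheaper than iterative optical flow or block matching, which is the motivation for the INS-aided approach on power-constrained airborne platforms.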