  • 1
    UID: gbv_1028034571
    Format: Online resource (VIII, 538 p., 147 illus., 109 illus. in color; online resource)
    Edition: Springer eBook Collection. Mathematics and Statistics
    ISBN: 9783319182841
    Series Statement: Springer Handbooks of Computational Statistics
    Content: Addressing a broad range of big data analytics in cross-disciplinary applications, this essential handbook focuses on the statistical prospects offered by recent developments in this field. To do so, it covers statistical methods for high-dimensional problems, algorithmic designs, computation tools, analysis flows and the software-hardware co-designs that are needed to support insightful discoveries from big data. The book is primarily intended for statisticians, computer experts, engineers and application developers interested in using big data analytics with statistics. Readers should have a solid background in statistics and computer science.
    Content: Preface -- Statistics, Statisticians, and the Internet of Things (John M. Jordan and Dennis K. J. Lin) -- Cognitive Data Analysis for Big Data (Jing Shyr, Jane Chu and Mike Woods) -- Statistical Leveraging Methods in Big Data (Xinlian Zhang, Rui Xie and Ping Ma) -- Scattered Data and Aggregated Inference (Xiaoming Huo, Cheng Huang and Xuelei Sherry Ni) -- Nonparametric Methods for Big Data Analytics (Hao Helen Zhang) -- Finding Patterns in Time Series (James E. Gentle and Seunghye J. Wilson) -- Variational Bayes for Hierarchical Mixture Models (Muting Wan, James G. Booth and Martin T. Wells) -- Hypothesis Testing for High-Dimensional Data (Wei Biao Wu, Zhipeng Lou and Yuefeng Han) -- High-Dimensional Classification (Hui Zou) -- Analysis of High-Dimensional Regression Models Using Orthogonal Greedy Algorithms (Hsiang-Ling Hsu, Ching-Kang Ing and Tze Leung Lai) -- Semi-Supervised Smoothing for Large Data Problems (Mark Vere Culp, Kenneth Joseph Ryan and George Michailidis) -- Inverse Modeling: A Strategy to Cope with Non-Linearity (Qian Lin, Yang Li and Jun S. Liu) -- Sufficient Dimension Reduction for Tensor Data (Yiwen Liu, Xin Xing and Wenxuan Zhong) -- Compressive Sensing and Sparse Coding (Kevin Chen and H. T. Kung) -- Bridging Density Functional Theory and Big Data Analytics with Applications (Chien-Chang Chen, Hung-Hui Juan, Meng-Yuan Tsai and Henry Horng-Shing Lu) -- Q3-D3-LSA: D3.js and generalized vector space models for Statistical Computing (Lukas Borke and Wolfgang Karl Härdle) -- A Tutorial on Libra: R Package for the Linearized Bregman Algorithm in High-Dimensional Statistics (Jiechao Xiong, Feng Ruan and Yuan Yao) -- Functional Data Analysis for Big Data: A Case Study on California Temperature Trends (Pantelis Zenon Hadjipantelis and Hans-Georg Müller) -- Bayesian Spatiotemporal Modeling for Detecting Neuronal Activation via Functional Magnetic Resonance Imaging (Martin Bezener, Lynn E. Eberly, John Hughes, Galin Jones and Donald R. 
Musgrove) -- Construction of Tight Frames on Graphs and Application to Denoising (Franziska Göbel, Gilles Blanchard and Ulrike von Luxburg) -- Beta-Boosted Ensemble for Big Credit Scoring Data (Maciej Zięba and Wolfgang Karl Härdle) --
    Additional Edition: ISBN 9783319182834
    Additional Edition: Also published in print: Handbook of big data analytics. Cham, Switzerland: Springer, 2018. ISBN 9783319182834
    Additional Edition: ISBN 3319182838
    Additional Edition: Printed edition ISBN 9783319182834
    Language: English
    Subjects: Computer Science, Mathematics
    Keywords: Statistics ; Data Science ; Big Data ; Textbook
    URL: Cover
  • 2
    UID: edochu_18452_22245
    Format: 1 online resource (57 pages)
    Content: Simulated hedge misspecification for risk-management purposes of cryptocurrencies.
    Content: The market for cryptocurrencies is very dynamic, with highly volatile movements and discontinuities from large jumps. We investigate the risk-management perspective when selling securities written on cryptocurrencies. To this day, options written on cryptocurrencies are not officially exchange-traded. This study mimics the dynamics of cryptocurrency markets in a simulation study. We assume that the asset follows the stochastic volatility with correlated jumps model presented in Duffie et al. (2000) and price options with parameters calibrated on the CRIX, a cryptocurrency index that serves as a representative of market movements. We investigate risk-management opportunities of hedging options written on cryptocurrencies and evaluate the hedge performance under model misspecification. The hedge models are misspecified in the sense that they include fewer sources of randomness than the data-generating model: the industry-standard Black-Scholes option pricing model, the Heston stochastic volatility model, and the Merton jump-diffusion model. We present different hedging strategies and perform an empirical study on delta-hedging. We report poor hedging results when calibration is poor. The results show good performance of the Black-Scholes and Heston models and outline the poor hedging performance of the Merton model. Lastly, we observe large unhedgeable losses in the left tail, which potentially result from large jumps.
    Note: Master's thesis, Humboldt-Universität zu Berlin, 2019
    Language: English
    URL: Full text (free access)
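The delta-hedging experiment this abstract describes can be sketched in miniature: one Black-Scholes delta hedge of a short call along a single simulated GBM path. All parameters below are illustrative placeholders, not the thesis's calibrated SVCJ setup.

```python
import numpy as np
from math import erf, exp, log, sqrt

def norm_cdf(x):
    # standard normal CDF via the error function (no SciPy needed)
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_price(S, K, r, sigma, tau):
    # Black-Scholes price of a European call
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)

def bs_call_delta(S, K, r, sigma, tau):
    # hedge ratio: shares held against one short call
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    return norm_cdf(d1)

def delta_hedge_error(S0=100.0, K=100.0, r=0.0, sigma=0.6, T=0.25,
                      n_steps=63, seed=7):
    """Sell one call, rebalance the delta hedge at each step along a
    simulated GBM path, and return the terminal hedging error."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = S0
    delta = bs_call_delta(S, K, r, sigma, T)
    cash = bs_call_price(S, K, r, sigma, T) - delta * S  # premium minus stock cost
    for i in range(1, n_steps + 1):
        # evolve the underlying under geometric Brownian motion
        S *= exp((r - 0.5 * sigma ** 2) * dt
                 + sigma * sqrt(dt) * rng.standard_normal())
        cash *= exp(r * dt)
        tau = T - i * dt
        new_delta = bs_call_delta(S, K, r, sigma, tau) if tau > 0 else float(S > K)
        cash -= (new_delta - delta) * S  # rebalance the stock position
        delta = new_delta
    payoff = max(S - K, 0.0)
    return delta * S + cash - payoff
```

With frequent rebalancing the hedging error stays small relative to the premium; under a jump model of the kind the thesis studies, no continuous delta hedge removes the jump risk.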
  • 3
    UID: edochu_18452_29339
    Format: 1 online resource (31 pages)
    Content: The presence of nonignorable missing response variables often leads to complex conditional distribution patterns that cannot be effectively captured through mean regression. In contrast, quantile regression offers valuable insights into the conditional distribution. Consequently, this article places emphasis on the quantile regression approach to address nonrandom missing data. Taking inspiration from fractional imputation, this paper proposes a novel smoothed quantile regression estimation equation based on a sampling importance resampling (SIR) algorithm instead of nonparametric kernel regression methods. Additionally, we present an augmented inverse probability weighting (AIPW) smoothed quantile regression estimation equation to reduce the influence of potential misspecification in a working model. The consistency and asymptotic normality of the empirical likelihood estimators corresponding to the above estimating equations are proven under the assumption of a correctly specified parameter working model. Furthermore, we demonstrate that the AIPW estimation equation converges to an IPW estimation equation when a parameter working model is misspecified, thus illustrating the robustness of the AIPW estimation approach. Through numerical simulations, we examine the finite sample properties of the proposed method when the working models are both correctly specified and misspecified. Furthermore, we apply the proposed method to analyze HIV CD4 data, thereby exploring variations in treatment effects and the influence of other covariates across different quantiles.
    Content: Peer Reviewed
    In: Basel : MDPI, 11,24
    Language: English
    URL: Full text (free access)
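The quantile-regression machinery in this abstract rests on the check (pinball) loss. A minimal numpy sketch of that principle follows; minimizing the pinball loss over a constant predictor recovers the sample quantile. The SIR and AIPW estimating equations themselves are not reproduced here.

```python
import numpy as np

def pinball_loss(theta, y, tau):
    """Average check (pinball) loss of a constant predictor theta at
    quantile level tau, the objective that quantile regression minimizes."""
    u = y - theta
    return np.mean(np.where(u >= 0, tau * u, (tau - 1.0) * u))

# Minimizing the pinball loss over a grid recovers the sample quantile.
rng = np.random.default_rng(1)
y = rng.standard_normal(5000)
grid = np.linspace(-3.0, 3.0, 2001)
theta_hat = grid[np.argmin([pinball_loss(t, y, 0.9) for t in grid])]
```

For tau = 0.5 the loss reduces to half the mean absolute deviation, so the minimizer is the median; other tau values trace out the rest of the conditional distribution.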
  • 4
    UID: edochu_18452_14670
    Format: 1 online resource (41 pages)
    Content: This study gives an outline of the modern theory of classification and regression trees (CART) and shows the advantages of CART applications in finance. Practical issues regarding CART applications and core implementation are presented. The second part of the work concentrates mainly on DAX30 market simulation results and shows how a CART-based business application can perform on the stock market, as well as what supplementary results can be obtained using CART as a forecasting system. In this context, technical and fundamental approaches are compared. Finally, the information-ageing effect in the context of learning-sample construction is analyzed.
    Note: Master's thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2005
    Language: English
    URL: Full text (free access)
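The core CART step behind the abstract above, exhaustively searching for the split point that most reduces squared error, can be sketched for a single numeric feature (the toy data below is made up for illustration):

```python
import numpy as np

def best_split(x, y):
    """Exhaustive search for the split threshold on feature x that
    minimizes the total within-node squared error of y, the greedy
    node-splitting step of a CART regression tree."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_threshold, best_sse = None, np.inf
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue  # cannot split between identical feature values
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_threshold, best_sse = (xs[i - 1] + xs[i]) / 2.0, sse
    return best_threshold, best_sse

# two clearly separated clusters: the split lands between them
x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0.0, 0.1, -0.1, 5.0, 5.1, 4.9])
split, sse = best_split(x, y)
```

A full tree applies this search recursively to each child node until a stopping rule (depth, node size, or error reduction) is met.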
  • 5
    UID: edochu_18452_14680
    Format: 1 online resource (85 pages)
    Content: Fundamental progress has been made in developing more realistic option pricing models. While the hedging performance of these models has been investigated for plain vanilla options, it is still unknown how well these generalizations improve the hedging of exotic options. Using different barrier options on the DAX, we examine a stochastic volatility, a jump diffusion and a mixed model. We consider delta hedging, vega hedging and delta hedging with minimum variance in the Heston, the Bates and the Merton model. Thus, this work deals with the question of model selection that is nowadays of great importance because of the growing number of models and exotic products.
    Note: Master's thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2005
    Language: English
    URL: Full text (free access)
  • 6
    UID: edochu_18452_14696
    Format: 1 online resource (88 pages)
    Content: In today's net-based technology culture, a new way of learning, teaching, and accessing or preparing learning materials is taking root in education. It enables individuals and institutions to play a more powerful and attractive role in the education process, referred to as e-learning. E-learning is internet-enabled learning that uses network technology to design, deliver, select, administer and extend learning. Though there is an observable increase in multimedia education across subject fields, the e-learning/e-teaching of statistics and its application to some subject areas is lacking. Statistics, as the science of obtaining, processing, and interpreting information from highly complex structured data, is often difficult for learners: it requires skill in handling quantitative and graphical material as well as mathematical ability. Developing an e-learning/e-teaching tool to address these concerns is a powerful and attractive way for statistics and its applications to benefit from e-learning. This thesis is based on such a tool, the MD*Book, and consists of two sections. Chapters 1 to 4 introduce the e-learning/e-teaching concept, the MD*Book tool and the XploRe Quantlet Client (XQC) techniques; the application of MD*Book is exemplified with an implementation for a Finance Introductory Course, the FIC MD*Booklet, which reflects one of several enhanced examples of statistical methods and analysis in course content using the XploRe quantlet technology. Chapters 5 and 6 explore criteria for e-learning evaluation, investigate e-learning/e-teaching with the FIC MD*Booklet through an evaluation study of its learning innovation, and outline suggestions and remarks on the findings. Chapter 7 presents a conclusion on the project.
    Note: Master's thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2004
    Language: English
    URL: Full text (free access)
  • 7
    Online Resource
    Berlin : Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät
    UID: edochu_18452_14353
    Content: The computer has always enabled statisticians to carry out their tasks effectively and through subsequent developments in digital technology, with increasing efficiency. Technology has also enabled statisticians to identify new areas of activity and to discover and define new tasks or areas of research. In other words, the computer has been the driving force for statisticians to go into new, untapped areas. This book showcases selected items from the C.A.S.E. computer museum, at Humboldt-Universität zu Berlin. It outlines the development of manually processed and edited data from the days of punched cards to today's state-of-the-art where data is generated and stored every time we visit a webpage or buy food at a grocery store.
    Content: Peer Reviewed
    Note: http://computermuseum.wiwi.hu-berlin.de/cmb/
    Language: German
    URL: Full text (free access)
  • 8
    UID: edochu_18452_14707
    Format: 1 online resource (98 pages)
    Content: An important empirical fact in financial markets is that return distributions are often skewed and heavy-tailed. This paper employs maximum likelihood estimation to estimate the five parameters of the generalized hyperbolic distribution, a highly flexible heavy-tailed distribution. The estimation uses Powell's method in multiple dimensions, and its performance is measured by simulation studies. Application to the financial market provides estimates of the return distributions of some financial assets.
    Note: Master's thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2005
    Language: English
    URL: Full text (free access)
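The workflow in the abstract above, maximizing a heavy-tailed likelihood with Powell's derivative-free method, can be sketched with SciPy. The generalized hyperbolic density is not in SciPy, so a location-scale Student-t stands in here as an illustrative heavy-tailed stand-in; a log-parameterization keeps df and scale positive without constraints.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def neg_loglik(params, x):
    """Negative log-likelihood of a location-scale Student-t.
    Parameters are (log df, loc, log scale) so the search is unconstrained."""
    log_df, loc, log_scale = params
    return -np.sum(student_t.logpdf(x, df=np.exp(log_df),
                                    loc=loc, scale=np.exp(log_scale)))

# simulate heavy-tailed "returns" and fit by Powell's derivative-free method
rng = np.random.default_rng(42)
x = student_t.rvs(df=5.0, loc=0.0, scale=1.0, size=4000, random_state=rng)
res = minimize(neg_loglik,
               x0=np.array([np.log(2.0), 0.0, np.log(0.5)]),
               args=(x,), method="Powell")
df_hat = float(np.exp(res.x[0]))
loc_hat = float(res.x[1])
scale_hat = float(np.exp(res.x[2]))
```

The same pattern carries over to the five-parameter generalized hyperbolic case: supply its log-density, reparameterize the constrained parameters, and let Powell's method search without derivatives.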
  • 9
    UID: edochu_18452_14688
    Format: 1 online resource (77 pages)
    Note: Master's thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2005
    Language: English
    URL: Full text (free access)
  • 10
    UID: edochu_18452_14689
    Format: 1 online resource (100 pages)
    Content: Risk management and the thorough understanding of the relations between financial markets and the standard theory of macroeconomics have always been among the topics most addressed by researchers, both financial mathematicians and economists. This work aims at explaining investors’ behavior from a macroeconomic aspect (modeled by the investors’ pricing kernel and their relative risk aversion) using stocks and options data. Daily estimates of investors’ pricing kernel (PK) and relative risk aversion (RRA) are obtained and used to construct and analyze a three-year long time-series. The first four moments of these time-series as well as their values at the money are the starting point of a principal component analysis. The relation between changes in a major index level and implied volatility at the money and between the principal components of the changes in RRA is found to be linear. The relation of the same explanatory variables to the principal components of the changes in PK is found to be log-linear, although this relation is not significant for all of the examined maturities.
    Note: Master's thesis, Humboldt-Universität zu Berlin, Wirtschaftswissenschaftliche Fakultät, 2005
    Language: English
    URL: Full text (free access)
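The principal component analysis used in the abstract above on the moments of the PK and RRA time series can be sketched via SVD of the centered data matrix. The data below is synthetic and purely illustrative, not the thesis's stock and options data.

```python
import numpy as np

def pca(X, k):
    """Top-k principal component loadings via SVD of the centered data,
    plus the share of total variance each component explains."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    return Vt[:k], explained[:k]

# two strongly correlated series plus pure noise: one component dominates,
# the situation PCA is designed to summarize
rng = np.random.default_rng(0)
base = rng.standard_normal(500)
X = np.column_stack([base,
                     base + 0.05 * rng.standard_normal(500),
                     0.05 * rng.standard_normal(500)])
loadings, explained = pca(X, 1)
```

The leading components can then serve as compact explanatory or response variables in a regression, as the abstract does with the changes in RRA and PK.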