  • 1
    Online Resource
    Doing Bayesian data analysis : a tutorial with R, JAGS, and Stan / John K. Kruschke
    Amsterdam : Academic Press, an imprint of Elsevier
    UID: almahu_9948327948602882
    Format: 1 online resource (xii, 759 pages) : illustrations
    Edition: Second edition.
    ISBN: 9780124059160 (e-book)
    Additional Edition: Print version: Kruschke, John K. Doing Bayesian data analysis : a tutorial with R, JAGS, and Stan. Amsterdam : Academic Press, an imprint of Elsevier, c2015. ISBN 9780124058880
    Language: English
    Keywords: Electronic books.
  • 2
    Online Resource
    Doing Bayesian data analysis : a tutorial with R, JAGS, and Stan / John K. Kruschke
    Amsterdam : Academic Press, an imprint of Elsevier
    UID: edocfu_9958117564202883
    Format: 1 online resource (xii, 759 pages) : illustrations
    Edition: 2nd ed.
    ISBN: 0-12-405916-3, 0-12-405888-4
    Note: Front Cover -- Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan -- Copyright -- Dedication -- Contents --
    Chapter 1: What's in This Book (Read This First!) -- 1.1 Real People Can Read This Book -- 1.1.1 Prerequisites -- 1.2 What's in This Book -- 1.2.1 You're busy. What's the least you can read? -- 1.2.2 You're really busy! Isn't there even less you can read? -- 1.2.3 You want to enjoy the view a little longer. But not too much longer -- 1.2.4 If you just gotta reject a null hypothesis… -- 1.2.5 Where's the equivalent of traditional test X in this book? -- 1.3 What's New in the Second Edition? -- 1.4 Gimme Feedback (Be Polite) -- 1.5 Thank You! --
    Part I: The Basics: Models, Probability, Bayes' Rule, and R --
    Chapter 2: Introduction: Credibility, Models, and Parameters -- 2.1 Bayesian Inference Is Reallocation of Credibility Across Possibilities -- 2.1.1 Data are noisy and inferences are probabilistic -- 2.2 Possibilities Are Parameter Values in Descriptive Models -- 2.3 The Steps of Bayesian Data Analysis -- 2.3.1 Data analysis without parametric models? -- 2.4 Exercises --
    Chapter 3: The R Programming Language -- 3.1 Get the Software -- 3.1.1 A look at RStudio -- 3.2 A Simple Example of R in Action -- 3.2.1 Get the programs used with this book -- 3.3 Basic Commands and Operators in R -- 3.3.1 Getting help in R -- 3.3.2 Arithmetic and logical operators -- 3.3.3 Assignment, relational operators, and tests of equality -- 3.4 Variable Types -- 3.4.1 Vector -- 3.4.1.1 The combine function -- 3.4.1.2 Component-by-component vector operations -- 3.4.1.3 The colon operator and sequence function -- 3.4.1.4 The replicate function -- 3.4.1.5 Getting at elements of a vector -- 3.4.2 Factor -- 3.4.3 Matrix and array -- 3.4.4 List and data frame -- 3.5 Loading and Saving Data -- 3.5.1 The read.csv and read.table functions -- 3.5.2 Saving data from R -- 3.6 Some Utility Functions -- 3.7 Programming in R -- 3.7.1 Variable names in R -- 3.7.2 Running a program -- 3.7.3 Programming a function -- 3.7.4 Conditions and loops -- 3.7.5 Measuring processing time -- 3.7.6 Debugging -- 3.8 Graphical Plots: Opening and Saving -- 3.9 Conclusion -- 3.10 Exercises --
    Chapter 4: What Is This Stuff Called Probability? -- 4.1 The Set of All Possible Events -- 4.1.1 Coin flips: Why you should care -- 4.2 Probability: Outside or Inside the Head -- 4.2.1 Outside the head: Long-run relative frequency -- 4.2.1.1 Simulating a long-run relative frequency -- 4.2.1.2 Deriving a long-run relative frequency -- 4.2.2 Inside the head: Subjective belief -- 4.2.2.1 Calibrating a subjective belief by preferences -- 4.2.2.2 Describing a subjective belief mathematically -- 4.2.3 Probabilities assign numbers to possibilities -- 4.3 Probability Distributions -- 4.3.1 Discrete distributions: Probability mass -- 4.3.2 Continuous distributions: Rendezvous with density -- 4.3.2.1 Properties of probability density functions -- 4.3.2.2 The normal probability density function -- 4.3.3 Mean and variance of a distribution -- 4.3.3.1 Mean as minimized variance -- 4.3.4 Highest density interval (HDI) -- 4.4 Two-Way Distributions -- 4.4.1 Conditional probability -- 4.4.2 Independence of attributes -- 4.5 Appendix: R Code for Figure 4.1 -- 4.6 Exercises --
    Chapter 5: Bayes' Rule -- 5.1 Bayes' Rule -- 5.1.1 Derived from definitions of conditional probability -- 5.1.2 Bayes' rule intuited from a two-way discrete table -- 5.2 Applied to Parameters and Data -- 5.2.1 Data-order invariance -- 5.3 Complete Examples: Estimating Bias in a Coin -- 5.3.1 Influence of sample size on the posterior -- 5.3.2 Influence of the prior on the posterior -- 5.4 Why Bayesian Inference Can Be Difficult -- 5.5 Appendix: R Code for Figures 5.1, 5.2, etc. -- 5.6 Exercises --
    Part II: All the Fundamentals Applied to Inferring a Binomial Probability --
    Chapter 6: Inferring a Binomial Probability via Exact Mathematical Analysis -- 6.1 The Likelihood Function: Bernoulli Distribution -- 6.2 A Description of Credibilities: The Beta Distribution -- 6.2.1 Specifying a beta prior -- 6.3 The Posterior Beta -- 6.3.1 Posterior is compromise of prior and likelihood -- 6.4 Examples -- 6.4.1 Prior knowledge expressed as a beta distribution -- 6.4.2 Prior knowledge that cannot be expressed as a beta distribution -- 6.5 Summary -- 6.6 Appendix: R Code for Figure 6.4 -- 6.7 Exercises --
    Chapter 7: Markov Chain Monte Carlo -- 7.1 Approximating a Distribution with a Large Sample -- 7.2 A Simple Case of the Metropolis Algorithm -- 7.2.1 A politician stumbles upon the Metropolis algorithm -- 7.2.2 A random walk -- 7.2.3 General properties of a random walk -- 7.2.4 Why we care -- 7.2.5 Why it works -- 7.3 The Metropolis Algorithm More Generally -- 7.3.1 Metropolis algorithm applied to Bernoulli likelihood and beta prior -- 7.3.2 Summary of Metropolis algorithm -- 7.4 Toward Gibbs Sampling: Estimating Two Coin Biases -- 7.4.1 Prior, likelihood and posterior for two biases -- 7.4.2 The posterior via exact formal analysis -- 7.4.3 The posterior via the Metropolis algorithm -- 7.4.4 Gibbs sampling -- 7.4.5 Is there a difference between biases? -- 7.4.6 Terminology: MCMC -- 7.5 MCMC Representativeness, Accuracy, and Efficiency -- 7.5.1 MCMC representativeness -- 7.5.2 MCMC accuracy -- 7.5.3 MCMC efficiency -- 7.6 Summary -- 7.7 Exercises --
    Chapter 8: JAGS -- 8.1 JAGS and its Relation to R -- 8.2 A Complete Example -- 8.2.1 Load data -- 8.2.2 Specify model -- 8.2.3 Initialize chains -- 8.2.4 Generate chains -- 8.2.5 Examine chains -- 8.2.5.1 The plotPost function -- 8.3 Simplified Scripts for Frequently Used Analyses -- 8.4 Example: Difference of Biases -- 8.5 Sampling from the Prior Distribution in JAGS -- 8.6 Probability Distributions Available in JAGS -- 8.6.1 Defining new likelihood functions -- 8.7 Faster Sampling with Parallel Processing in RunJAGS -- 8.8 Tips for Expanding JAGS Models -- 8.9 Exercises --
    Chapter 9: Hierarchical Models -- 9.1 A Single Coin from a Single Mint -- 9.1.1 Posterior via grid approximation -- 9.2 Multiple Coins from a Single Mint -- 9.2.1 Posterior via grid approximation -- 9.2.2 A realistic model with MCMC -- 9.2.3 Doing it with JAGS -- 9.2.4 Example: Therapeutic touch -- 9.3 Shrinkage in Hierarchical Models -- 9.4 Speeding up JAGS -- 9.5 Extending the Hierarchy: Subjects Within Categories -- 9.5.1 Example: Baseball batting abilities by position -- 9.6 Exercises --
    Chapter 10: Model Comparison and Hierarchical Modeling -- 10.1 General Formula and the Bayes Factor -- 10.2 Example: Two Factories of Coins -- 10.2.1 Solution by formal analysis -- 10.2.2 Solution by grid approximation -- 10.3 Solution by MCMC -- 10.3.1 Nonhierarchical MCMC computation of each model's marginal likelihood -- 10.3.1.1 Implementation with JAGS -- 10.3.2 Hierarchical MCMC computation of relative model probability -- 10.3.2.1 Using pseudo-priors to reduce autocorrelation -- 10.3.3 Models with different "noise" distributions in JAGS -- 10.4 Prediction: Model Averaging -- 10.5 Model Complexity Naturally Accounted for -- 10.5.1 Caveats regarding nested model comparison -- 10.6 Extreme Sensitivity to Prior Distribution -- 10.6.1 Priors of different models should be equally informed -- 10.7 Exercises --
    Chapter 11: Null Hypothesis Significance Testing -- 11.1 Paved with Good Intentions -- 11.1.1 Definition of p value -- 11.1.2 With intention to fix N -- 11.1.3 With intention to fix z -- 11.1.4 With intention to fix duration -- 11.1.5 With intention to make multiple tests -- 11.1.6 Soul searching -- 11.1.7 Bayesian analysis -- 11.2 Prior Knowledge -- 11.2.1 NHST analysis -- 11.2.2 Bayesian analysis -- 11.2.2.1 Priors are overt and relevant -- 11.3 Confidence Interval and Highest Density Interval -- 11.3.1 CI depends on intention -- 11.3.1.1 CI is not a distribution -- 11.3.2 Bayesian HDI -- 11.4 Multiple Comparisons -- 11.4.1 NHST correction for experimentwise error -- 11.4.2 Just one Bayesian posterior no matter how you look at it -- 11.4.3 How Bayesian analysis mitigates false alarms -- 11.5 What a Sampling Distribution Is Good For -- 11.5.1 Planning an experiment -- 11.5.2 Exploring model predictions (posterior predictive check) -- 11.6 Exercises --
    Chapter 12: Bayesian Approaches to Testing a Point ("Null") Hypothesis -- 12.1 The Estimation Approach -- 12.1.1 Region of practical equivalence -- 12.1.2 Some examples -- 12.1.2.1 Differences of correlated parameters -- 12.1.2.2 Why HDI and not equal-tailed interval? -- 12.2 The Model-Comparison Approach -- 12.2.1 Is a coin fair or not? -- 12.2.1.1 Bayes' factor can accept null with poor precision -- 12.2.2 Are different groups equal or not? -- 12.2.2.1 Model specification in JAGS -- 12.3 Relations of Parameter Estimation and Model Comparison -- 12.4 Estimation or Model Comparison? -- 12.5 Exercises --
    Chapter 13: Goals, Power, and Sample Size -- 13.1 The Will to Power -- 13.1.1 Goals and obstacles -- 13.1.2 Power -- 13.1.3 Sample size -- 13.1.4 Other expressions of goals -- 13.2 Computing Power and Sample Size -- 13.2.1 When the goal is to exclude a null value -- 13.2.2 Formal solution and implementation in R -- 13.2.3 When the goal is precision -- 13.2.4 Monte Carlo approximation of power -- 13.2.5 Power from idealized or actual data -- 13.3 Sequential Testing and the Goal of Precision.
    Language: English
  • 3
    Online Resource
    Doing Bayesian data analysis : a tutorial with R, JAGS, and Stan / John K. Kruschke
    Amsterdam : Academic Press, an imprint of Elsevier
    UID: edoccha_9958117564202883
    Format: 1 online resource (xii, 759 pages) : illustrations
    Edition: 2nd ed.
    ISBN: 0-12-405916-3, 0-12-405888-4
    Note: Contents note identical to entry 2 above.
    Language: English
  • 4
    Online Resource
    Doing Bayesian data analysis : a tutorial with R, JAGS, and Stan / John K. Kruschke
    Amsterdam : Academic Press, an imprint of Elsevier
    UID: almahu_9947366898402882
    Format: 1 online resource (xii, 759 pages) : illustrations
    Edition: 2nd ed.
    ISBN: 0-12-405916-3, 0-12-405888-4
    Note: Contents note identical to entry 2 above.
    Language: English
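
The contents note reproduced in entry 2 above lists, among other topics, exact Bayesian inference for a coin's bias with a beta prior and Bernoulli likelihood (Chapter 6). The short R sketch below is purely illustrative of that conjugate update and is not code taken from the book; the prior shape parameters and the flip counts are assumptions chosen only for the example.

    # Illustrative only: conjugate beta-binomial update for a coin's bias.
    # Prior Beta(a, b); data: z heads in N flips; posterior Beta(a + z, b + N - z).
    a <- 2; b <- 2          # assumed prior shape parameters (hypothetical)
    z <- 14; N <- 20        # assumed data: 14 heads in 20 flips (hypothetical)
    postA <- a + z          # posterior shape parameters
    postB <- b + N - z
    postMean <- postA / (postA + postB)            # posterior mean of the bias
    ci95 <- qbeta(c(0.025, 0.975), postA, postB)   # central 95% credible interval
    print(postMean)
    print(ci95)

An equal-tailed interval is used here only because it needs nothing beyond base R; the contents note's Section 12.1.2.2 ("Why HDI and not equal-tailed interval?") discusses why a highest density interval can be preferable.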