
February 15, 2013

Title: Hidden Markov Models
Version: 1.7-0
Date: 2012-04-14
Author: David Harte
Maintainer: David Harte <david@statsresearch.co.nz>
Description: Contains functions for the analysis of Discrete Time Hidden Markov Models, Markov Modulated GLMs and the Markov Modulated Poisson Process. It includes functions for simulation, parameter estimation, and the Viterbi algorithm. See the topic "HiddenMarkov" for an introduction to the package, and "Changes" for a list of recent changes. The algorithms are based on those of Walter Zucchini.
Suggests: snow
License: GPL (>= 2)
URL: http://cran.at.r-project.org/web/packages/HiddenMarkov
Repository: CRAN
Date/Publication: 2012-04-14 05:46:53
NeedsCompilation: yes

R topics documented:

BaumWelch . . . . . . . . . . . . . . . . . . . . .  2
bwcontrol . . . . . . . . . . . . . . . . . . . . .  4
Change Log  . . . . . . . . . . . . . . . . . . . .  5
compdelta . . . . . . . . . . . . . . . . . . . . .  8
Demonstration . . . . . . . . . . . . . . . . . . .  9
dthmm . . . . . . . . . . . . . . . . . . . . . . .  9
Estep . . . . . . . . . . . . . . . . . . . . . . . 15
forwardback . . . . . . . . . . . . . . . . . . . . 17
HiddenMarkov  . . . . . . . . . . . . . . . . . . . 19
logLik  . . . . . . . . . . . . . . . . . . . . . . 21
mchain  . . . . . . . . . . . . . . . . . . . . . . 22
mmglm . . . . . . . . . . . . . . . . . . . . . . . 23
mmglm-2nd-level-functions . . . . . . . . . . . . . 30
mmpp  . . . . . . . . . . . . . . . . . . . . . . . 30
mmpp-2nd-level-functions  . . . . . . . . . . . . . 32
Mstep . . . . . . . . . . . . . . . . . . . . . . . 32
neglogLik . . . . . . . . . . . . . . . . . . . . . 37
probhmm . . . . . . . . . . . . . . . . . . . . . . 41
residuals . . . . . . . . . . . . . . . . . . . . . 42
simulate  . . . . . . . . . . . . . . . . . . . . . 44
summary . . . . . . . . . . . . . . . . . . . . . . 46
Transform-Parameters  . . . . . . . . . . . . . . . 47
Viterbi . . . . . . . . . . . . . . . . . . . . . . 48
Index . . . . . . . . . . . . . . . . . . . . . . . 51

BaumWelch

Estimation Using Baum-Welch Algorithm

Description

Estimates the parameters of a hidden Markov model. The Baum-Welch algorithm (Baum et al, 1970) referred to in the HMM literature is a version of the EM algorithm (Dempster et al, 1977). See Hartley (1958) for an earlier application of the EM methodology, though not referred to as such.

Usage

BaumWelch(object, control, ...)
## S3 method for class 'dthmm'
BaumWelch(object, control = bwcontrol(), ...)
## S3 method for class 'mmglm0'
BaumWelch(object, control = bwcontrol(), ...)
## S3 method for class 'mmglm1'
BaumWelch(object, control = bwcontrol(), ...)
## S3 method for class 'mmglmlong1'
BaumWelch(object, control = bwcontrol(), SNOWcluster=NULL,
          tmpfile=NULL, ...)
## S3 method for class 'mmpp'
BaumWelch(object, control = bwcontrol(), ...)

Arguments

object       an object of class "dthmm", "mmglm0", "mmglm1", "mmglmlong1", or "mmpp".

control      a list of control settings for the iterative process. These can be changed by using the function bwcontrol.

SNOWcluster  see section below called "Parallel Processing".

tmpfile      name of a file (.Rda) into which estimates are written at each 10th iteration. The model object is called object. If NULL (default), no file is created.

...          other arguments.

Details

The initial parameter values used by the EM algorithm are those that are contained within the input object.

The code for the methods "dthmm", "mmglm0", "mmglm1", "mmglmlong1" and "mmpp" can be viewed by typing BaumWelch.dthmm, BaumWelch.mmglm0, BaumWelch.mmglm1, BaumWelch.mmglmlong1 or BaumWelch.mmpp, respectively, on the R command line.

Value

The output object (a list) will have the same class as the input, and will have the same components. The parameter values will be replaced by those estimated by this function. The object will also contain additional components.

An object of class "dthmm" will also contain

u     an n × m matrix containing estimates of the conditional expectations. See "Details" in Estep.

v     an n × m × m array containing estimates of the conditional expectations. See "Details" in Estep.

LL    value of log-likelihood at the end.

iter  number of iterations performed.

diff  difference between final and previous log-likelihood.

Parallel Processing

In longitudinal models, the forward and backward equations need to be calculated for each individual subject. These can be done independently, the results being concatenated to be used in the E-step. If the argument SNOWcluster is set, subjects are divided equally between each node in the cluster for the calculation of the forward and backward equations. This division is very basic, and assumes that all nodes run at a roughly comparable speed. If the communication between nodes is slow and the dataset is small, then the time taken to allocate the work to the various nodes may in fact take more time than simply using one processor to perform all of the calculations.

The required steps in initiating parallel processing are as follows.

# load the "snow" package
library(snow)

# define the SNOW cluster object, e.g. a SOCK cluster
# where each node has the same R installation
cl <- makeSOCKcluster(c("localhost", "horoeka.localdomain",
                        "horoeka.localdomain", "localhost"))

# A more general setup: Totara is Fedora, Rimu is Debian:
# Use 2 processors on Totara, 1 on Rimu:
totara <- list(host="localhost",
               rscript="/usr/lib/R/bin/Rscript",
               snowlib="/usr/lib/R/library")
rimu   <- list(host="rimu.localdomain",
               rscript="/usr/lib/R/bin/Rscript",
               snowlib="/usr/local/lib/R/site-library")
cl <- makeCluster(list(totara, totara, rimu), type="SOCK")

# then define the required model object
# say the model object is called x
BaumWelch(x, SNOWcluster=cl)

# stop the R jobs on the slave machines
stopCluster(cl)

Note that the communication method does not need to be SOCKS; see the snow package documentation, topic snow-startstop, for other options. Further, if some nodes are on other machines, the firewalls may need to be tweaked. The master machine initiates the R jobs on the slave machines by communicating through port 22 (security keys are needed rather than passwords), and subsequent communications through port 10187. Again, these details can be tweaked in the options settings within the snow package.

References

Cited references are listed on the HiddenMarkov manual page.

See Also

logLik, residuals, simulate, summary, neglogLik
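The essential shape of the EM iteration that BaumWelch performs can be illustrated with a self-contained base-R sketch for a two-state Bernoulli HMM. This is a toy re-implementation for illustration only, not the package's Fortran-backed code, and the variable names here are invented; it demonstrates the defining property of the algorithm, namely that the log-likelihood never decreases between iterations.

```r
# Toy Baum-Welch for a 2-state HMM with Bernoulli observations.
# Base-R sketch only; the package's BaumWelch uses Fortran code
# and supports many more observation distributions.

set.seed(1)
x <- rbinom(200, size = 1,
            prob = ifelse(rep(1:0, each = 100), 0.9, 0.2))

Pi    <- matrix(c(0.7, 0.3,
                  0.3, 0.7), 2, 2, byrow = TRUE)
delta <- c(0.5, 0.5)
p     <- c(0.6, 0.4)          # Pr{X = 1 | state}

n <- length(x)
loglik <- rep(NA_real_, 20)
for (iter in 1:20) {
    prob <- cbind(dbinom(x, 1, p[1]), dbinom(x, 1, p[2]))
    # scaled forward-backward recursions
    alpha <- matrix(0, n, 2); beta <- matrix(0, n, 2)
    scale <- rep(0, n)
    alpha[1, ] <- delta * prob[1, ]
    scale[1] <- sum(alpha[1, ]); alpha[1, ] <- alpha[1, ]/scale[1]
    for (i in 2:n) {
        alpha[i, ] <- (alpha[i-1, ] %*% Pi) * prob[i, ]
        scale[i] <- sum(alpha[i, ]); alpha[i, ] <- alpha[i, ]/scale[i]
    }
    beta[n, ] <- 1
    for (i in (n-1):1)
        beta[i, ] <- (Pi %*% (prob[i+1, ] * beta[i+1, ]))/scale[i+1]
    loglik[iter] <- sum(log(scale))
    # E-step quantities u and v, then the M-step updates
    u <- alpha * beta; u <- u/rowSums(u)
    v <- matrix(0, 2, 2)
    for (i in 2:n)
        v <- v + (alpha[i-1, ] %o% (prob[i, ] * beta[i, ])) * Pi / scale[i]
    delta <- u[1, ]
    Pi    <- v / rowSums(v)
    p     <- colSums(u * x) / colSums(u)
}

# EM guarantees a non-decreasing log-likelihood
stopifnot(all(diff(loglik) > -1e-8))
```

The E-step quantities u and v correspond to the u and v matrices documented under Estep; the M-step lines are the closed-form updates for this particular (Bernoulli, non-stationary) special case.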

bwcontrol

Control Parameters for Baum Welch Algorithm

Description

Creates a list of parameters that control the operation of BaumWelch.

Usage

bwcontrol(maxiter = 500, tol = 1e-05, prt = TRUE,
          posdiff = TRUE, converge = expression(diff < tol))

Arguments

maxiter   is the maximum number of iterations, default is 500.

tol       is the convergence criterion, default is 0.00001.

prt       is logical, and determines whether information is printed at each iteration; default is TRUE.

posdiff   is logical, and determines whether the iterative process stops if a negative log-likelihood difference occurs, default is TRUE.

converge  is an expression giving the convergence criterion. The default is the difference between successive values of the log-likelihood.

Examples

# Increase the maximum number of iterations to 1000.
# All other components will retain their default values.
a <- bwcontrol(maxiter=1000)
print(a)
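The converge component is an unevaluated R expression rather than a function; conceptually, it is evaluated inside the iterative loop in a context where diff and tol hold their current values. A minimal base-R sketch of that mechanism follows (the loop and the stand-in log-likelihood are invented for illustration; this is not the package's internal code):

```r
# Sketch of evaluating a bwcontrol-style "converge" expression
# inside an iterative loop; illustration only.
control <- list(maxiter = 500, tol = 1e-05,
                converge = expression(diff < tol))

oldLL <- -Inf
for (iter in 1:control$maxiter) {
    LL   <- -1/iter            # stand-in for the real log-likelihood
    diff <- LL - oldLL         # same name as in the default expression
    tol  <- control$tol
    if (iter > 1 && eval(control$converge)) break
    oldLL <- LL
}
```

Because the test is an expression, a user can supply any criterion that refers to the loop variables, e.g. converge = expression(abs(diff) < 1e-8).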

Change Log

Changes Made to Package HiddenMarkov

Description

This page contains a listing of recent changes made to the package.

Details

1. Since we have included different classes of HMMs (see dthmm, mmglm and mmpp), it is much tidier to use an object orientated approach. This ensures that the functions across all models follow a more consistent naming convention, and also that the argument lists for the different model functions are simpler and more consistent (see HiddenMarkov). (14 Sep 2007)

2. The main tasks (model fitting, residuals, simulation, Viterbi, etc) can now be called by generic functions (see topic HiddenMarkov). The package documentation has been rearranged so that these generic functions contain the documentation for all model types (e.g. see BaumWelch). (14 Sep 2007)

3. There are a number of functions, still contained in the package, that are obsolete. This is either because they do not easily fit into the current naming convention used to implement the more object orientated approach, or their argument list is slightly complicated. These functions have been grouped in the topics dthmm.obsolete and mmpp.obsolete. (14 Sep 2007)

4. There are various second level functions. For example, the model fitting is achieved by the generic BaumWelch function. However, this will call functions to do the E-step, M-step, forward and backward probabilities, and so on. At the moment, these second level functions have not been modified into an object orientated approach. It is not clear at this point whether this would be advantageous. If one went down this route, then one would probably group all of the E-step functions (for all models) under the same topic. If not, then it may be best to group all second level functions for each model under the same topic (e.g. forwardback, probhmm and Estep would be grouped together, being the second level functions for the dthmm model). (14 Sep 2007)

5. The original function called Viterbi has been renamed to Viterbihmm, and Viterbi is now a generic function. (14 Sep 2007)

6. The manual pages HiddenMarkov-dthmm-deprecated and HiddenMarkov-mmpp-deprecated have been given a keyword of "internal"; this hides them from the listing of package functions. Programming code that uses old versions of the functions should still work with this revised version of the package; however, you will get warning messages stating that certain functions are deprecated, and suggesting a possible alternative. To get a quick overview of the programming style, have a look at the examples in topic dthmm. (09 Nov 2007)

7. forwardback: for loops replaced by Fortran code, much faster. The corresponding R code is still contained within the function in case the Fortran has incompatibility issues. (23 Nov 2007)

8. forwardback.mmpp: for loops replaced by Fortran code. The corresponding R code is still contained within the function in case the Fortran has incompatibility issues. (24 Nov 2007)

9. Estep.mmpp: for loops replaced by Fortran code. The corresponding R code is still contained within the function in case the Fortran has incompatibility issues. (27 Nov 2007)

10. BaumWelch.mmpp: for loop replaced by Fortran code; cuts time considerably. These loops in R used far more time than the forward and backward equations. (3 Dec 2007)

11. forwardback.mmpp and Estep.mmpp: inclusion of all variable sized arrays in the Fortran subroutine calls to be compatible with non gfortran compilers. (3 Dec 2007)

12. More added for calls to Fortran subroutines multi1 and multi2. (5 Dec 2007)

13. forwardback.mmpp: error in Fortran code of loop 6, j1=0 corrected to j1=1. (6 Dec 2007)

14. BaumWelch.mmpp: "if (diff < 0) stop ..." changed to "if (diff < 0 & control$posdiff) stop ...", consistent with BaumWelch.dthmm. (11 Dec 2007)

15. mmglm0: remove some LaTeX specific formatting for compatibility with more recent versions of R. (15 Feb 2008)

16. dthmm: argument nonstat was not set correctly. (15 Feb 2008)

17. mmpp: argument nonstat was not set correctly. (18 Feb 2008)

18. Tidied HTML representation of equations in manual pages. (21 Jun 2008)

19. neglogLik: format of this function changed to be consistent with that in package PtProcess. Argument p renamed as params. (22 Jun 2008)

20. Pi2vector, vector2Pi, Q2vector, vector2Q: new functions providing an alternative means of calculating maximum likelihood parameter estimates. (23 Jun 2008)

21. neglogLik: argument updatep has been renamed to pmap. (3 Jul 2008)

22. dthmm: argument discrete set automatically for known distributions; stops if not set for unknown distributions. (3 Jul 2008)

23. forwardback and Estep: argument fortran added. (9 Jul 2008)

24. logLik: for loops replaced by Fortran code. The corresponding R code is still contained within the function in case the Fortran has incompatibility issues. (07 Aug 2008)

25. All cited references are now only listed in the topic HiddenMarkov. (26 Jan 2009)

26. Hyperlinks on package vignettes page. (10 Oct 2009)

27. Viterbi: Correct hyperlink to base function which. (15 Dec 2009)

28. probhmm: Modify for greater efficiency and generality to accommodate more general models. Arguments of probhmm have also been changed. (5 Jan 2010)

29. residualshmm: Made defunct, incompatible with revised probhmm. (18 Jan 2010)

30. residuals: Methods added for objects of class "mmglm1" and "mmglmlong1". (04 Feb 2010)

31. dthmm: clarify argument distn on manual page. (13 Feb 2010)

32. Viterbi: Methods added for objects of class "mmglm1" and "mmglmlong1". Generates an error message when applied to objects of class "mmpp"; method not currently available. (29 Jul 2010)

33. mmglm: Renamed to mmglm0; new version mmglm1. mmglmlong1: new function for longitudinal data. (30 Jul 2010)

34. forwardback.dthmm: New argument "fwd.only". (30 Jul 2010)

35. logLik.dthmm: Calls forwardback.dthmm to perform calculations. (30 Jul 2010)

36. logLik.mmpp: Calls forwardback.mmpp to perform calculations. (30 Jul 2010)

37. logLik: Method added for object of class "mmglmlong1". (03 Aug 2010)

38. BaumWelch.mmglmlong1: new argument tmpfile added. (05 Aug 2010)

39. dthmm: revised documentation about the nature of parameter estimates when the Markov chain is stationary. (05 Aug 2010)

40. Revise examples in /tests directory. (05 Aug 2010)

41. mmglm and neglogLik: Restrict the number of iterations in examples on manual pages to minimise time during package checks. (24 Sep 2010)

42. Viterbi: Now generates an error message when applied to objects of class "mmpp"; currently no method available. (19 Dec 2010)

43. makedistn: Change the eval(parse(text=paste(x, ..., sep=""))) construction so that log arguments are passed via list(log=log). (19 Dec 2010)

44. pglm, pmmglm: Change all log arguments to log.p. (02 May 2011)

45. "Viterbi.mmglm1": Fixed bug with number of Bernoulli trials specification when using a binomial family. (5 Nov 2011)

46. Add CITATION file. (19 Dec 2011)

47. Implement very basic NAMESPACE and remove file /R/zzz.R. (19 Dec 2011)

48. List functions explicitly in NAMESPACE. modify.func: New function to allow the user to modify package functions in the NAMESPACE set-up. See manual page for more details. (7 Mar 2012)

49. Mstep: Revised documentation about distribution requirements and ways to include other distributions into the software framework. (14 Apr 2012)

50. modify.func: Function removed. This function violates the CRAN policy as users are not supposed to change the NAMESPACE on such packages. Some examples where it is required to modify package functions will not work, for example, the second example in Mstep. See the second example in Mstep for a work-around when package functions need to be modified. (14 Apr 2012)

Future Development

1. The functions Viterbi and residuals need methods for objects of class mmpp.

2. A number of the original functions have names that are too general. For example, forwardback calculates the forward-backward probabilities, but only for the model dthmm. The corresponding function for the mmpp model is forwardback.mmpp. It would be more consistent to attach to these original functions a dthmm suffix.

3. The demonstration examples are all for dthmm. Obviously, also need some for mmglm1, mmglmlong1 and mmpp.

compdelta

Marginal Distribution of Stationary Markov Chain

Description

Computes the marginal distribution of a stationary Markov chain with transition probability matrix Π. The m discrete states of the Markov chain are denoted by 1, 2, ..., m.

Usage

compdelta(Pi)

Arguments

Pi   is the m × m transition probability matrix of the Markov chain.

Details

If the Markov chain is stationary, then the marginal distribution δ satisfies

δ = δΠ.

Obviously, Σj δj = 1.

Value

A numeric vector of length m containing the marginal probabilities.

Examples

Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
               1/3, 1/3, 1/3,   0,   0,
                 0, 1/3, 1/3, 1/3,   0,
                 0,   0, 1/3, 1/3, 1/3,
                 0,   0,   0, 1/2, 1/2),
             byrow=TRUE, nrow=5)

print(compdelta(Pi))
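Independently of the package, the same marginal distribution can be computed in base R as the normalised left eigenvector of Π associated with the eigenvalue 1, which makes the defining relation δ = δΠ easy to verify numerically. The sketch below is standalone and does not use compdelta itself:

```r
# Base-R computation of the marginal (stationary) distribution of a
# Markov chain; equivalent in spirit to compdelta, but standalone.
Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
               1/3, 1/3, 1/3,   0,   0,
                 0, 1/3, 1/3, 1/3,   0,
                 0,   0, 1/3, 1/3, 1/3,
                 0,   0,   0, 1/2, 1/2),
             byrow = TRUE, nrow = 5)

# left eigenvector of Pi for eigenvalue 1, i.e. eigenvector of t(Pi)
e <- eigen(t(Pi))
i <- which.min(abs(e$values - 1))
delta <- Re(e$vectors[, i])
delta <- delta / sum(delta)      # normalise so that sum(delta) == 1

# check the defining property delta = delta %*% Pi
stopifnot(max(abs(delta %*% Pi - delta)) < 1e-10)
stopifnot(abs(sum(delta) - 1) < 1e-10)
```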

Demonstration

Demonstration Examples

Description

Demonstration examples can be run by executing the code below.

Examples

# Model with class "dthmm" with the Beta distribution
demo("beta", package="HiddenMarkov")

# Model with class "dthmm" with the Gamma distribution
demo("gamma", package="HiddenMarkov")

# Model with class "dthmm" with the Log Normal distribution
demo("lnorm", package="HiddenMarkov")

# Model with class "dthmm" with the Logistic distribution
demo("logis", package="HiddenMarkov")

# Model with class "dthmm" with the Gaussian distribution
demo("norm", package="HiddenMarkov")

dthmm

Discrete Time HMM Object (DTHMM)

Description

Creates a discrete time hidden Markov model object with class "dthmm". The observed process is univariate.

Usage

dthmm(x, Pi, delta, distn, pm, pn = NULL,
      discrete = NULL, nonstat = TRUE)

Arguments

x      is a vector of length n containing the univariate observed process. Alternatively, x could be specified as NULL, meaning that the data will be added later (e.g. simulated).

Pi     is the m × m transition probability matrix of the homogeneous hidden Markov chain.

delta  is the marginal probability distribution of the m hidden states at the first time point.

distn     is a character string with the abbreviated distribution name. Distributions provided by the package are Beta ("beta"), Binomial ("binom"), Exponential ("exp"), GammaDist ("gamma"), Lognormal ("lnorm"), Logistic ("logis"), Normal ("norm"), and Poisson ("pois"). See topic Mstep, Section "Modifications and Extensions", to extend to other distributions.

pm        is a list object containing the (Markov dependent) parameter values associated with the distribution of the observed process (see below).

pn        is a list object containing the observation dependent parameter values associated with the distribution of the observed process (see below).

discrete  is logical, and is TRUE if distn is a discrete distribution. Set automatically for distributions already contained in the package.

nonstat   is logical, TRUE (default) if the homogeneous Markov chain is assumed to be non-stationary. See section "Stationarity" below.

Value

A list object with class "dthmm", containing the above arguments as named components.

Notation

1. MacDonald & Zucchini (1997) use t to denote the time, where t = 1, ..., T. To avoid confusion with other uses of t and T in R, we use i = 1, ..., n.

2. We denote the observed sequence as {Xi}, i = 1, ..., n, and the hidden Markov chain as {Ci}, i = 1, ..., n.

3. The hidden Markov chain has m states denoted by 1, ..., m.

4. The Markov chain transition probability matrix is denoted by Π, where the (j, k)th element is

   πjk = Pr{Ci+1 = k | Ci = j}

   for all i (i.e. all time points), and j, k = 1, ..., m.

5. The Markov chain is assumed to be homogeneous, i.e. for each j and k, πjk is constant over time.

6. The Markov chain is said to be stationary if the marginal distribution is the same over time, i.e. δj = Pr{Ci = j} is constant for all i. The marginal distribution is denoted by δ = (δ1, ..., δm).

7. The history of the observed process up to time i is denoted by X(i), i.e. X(i) = (X1, ..., Xi), where i = 1, ..., n. Similarly for C(i).

List Object pm

The list object pm contains parameter values for the probability distribution of the observed process that are dependent on the hidden Markov state. These parameters are generally required to be estimated. See "Modifications" in topic Mstep when some do not require estimation.

The names are determined by the required probability distribution. For example, if distn == "norm", the argument names must coincide with those used by the functions dnorm or rnorm, which are mean and sd. If they both vary according to the hidden Markov state, then pm should have the named components mean and sd. These are both vectors of length m containing the means and standard deviations of the observed process when the hidden Markov chain is in each of the m states. If, for example, sd was "time" dependent, then sd would be contained in pn (see below).

If distn == "pois", then pm should have one component named lambda, being the parameter name in the function dpois. Even if there is only one parameter, the vector component should still be within a list and named.

Assume that the hidden Markov chain has m states, and that there are parameters that are dependent on the hidden Markov state. Then the list object pm should contain named vector components each of length m.

List Object pn

The list object pn contains parameter values of the probability distribution for the observed process that are dependent on the observation number or "time". These parameters are assumed to be known. Assume that the observed process is of length n, and that there are parameters that are dependent on the observation number or time. Then the list object pn should contain named vector components each of length n.

For example, in the observed process we may count the number of successes in a known number of Bernoulli trials, i.e. the number of Bernoulli trials is known at each time point, but the probability of success varies according to a hidden Markov state. The prob parameter of rbinom (or dbinom) would be specified in pm and the size parameter would be specified in pn.

One could also have a situation where the observed process was Gaussian, with the means varying according to the hidden Markov state, but the variances varying non-randomly according to the observation number (or vice versa). Here mean would be specified within pm and sd within pn. Note that a given parameter can only occur within one of pm or pn.

Complete Data Likelihood

The "complete data likelihood", Lc, is

Lc = Pr{X1 = x1, ..., Xn = xn, C1 = c1, ..., Cn = cn}.

This can be shown to be

Pr{X1 = x1 | C1 = c1} Pr{C1 = c1} ∏(i=2..n) Pr{Xi = xi | Ci = ci} Pr{Ci = ci | Ci−1 = ci−1},

and hence, substituting model parameters, we get

Lc = δc1 πc1c2 πc2c3 · · · πcn−1cn ∏(i=1..n) Pr{Xi = xi | Ci = ci},

and so

log Lc = log δc1 + Σ(i=2..n) log πci−1ci + Σ(i=1..n) log Pr{Xi = xi | Ci = ci}.

Hence the "complete data likelihood" is split into three terms: the first relates to parameters of the marginal distribution (Markov chain), the second to the transition probabilities, and the third to the distribution parameters of the observed random variable.

Stationarity

When the hidden Markov chain is assumed to be non-stationary, the complete data likelihood has a neat structure, in that δ only occurs in the first term, Π only occurs in the second term, and the parameters associated with the observed probabilities only occur in the third term. Hence, each term can be maximised separately, and the likelihood can easily be maximised by maximising each term individually. In this situation, the estimated parameters using BaumWelch will be the "exact" maximum likelihood estimates.

When the hidden Markov chain is assumed to be stationary, δ = Π′δ (see topic compdelta). This raises more complicated numerical problems, as the first term is effectively a constraint. In our implementation of the EM algorithm, we deal with this in a slightly ad-hoc manner by effectively disregarding the first term, which is assumed to be relatively small. In the M-step, the transition matrix is determined by the first two terms of the complete data likelihood, and then δ is estimated using the relation δ = δΠ. Hence, using the BaumWelch function will only provide approximate maximum likelihood estimates. Exact solutions can be calculated by directly maximising the likelihood function; see the first example in neglogLik.

References

Cited references are listed on the HiddenMarkov manual page.

Examples

#----- Test Gaussian Distribution -----

Pi <- matrix(c(1/2, 1/2,   0,
               1/3, 1/3, 1/3,
                 0, 1/2, 1/2),
             byrow=TRUE, nrow=3)
delta <- c(0, 1, 0)

x <- dthmm(NULL, Pi, delta, "norm",
           list(mean=c(1, 6, 3), sd=c(0.5, 1, 0.5)))

x <- simulate(x, nsim=1000)

# use above parameter values as initial values
y <- BaumWelch(x)

print(summary(y))
print(logLik(y))
hist(residuals(y))

# check parameter estimates
print(sum(y$delta))
print(y$Pi %*% rep(1, ncol(y$Pi)))

#----- Test Poisson Distribution -----

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(0, 1)

x <- dthmm(NULL, Pi, delta, "pois",
           list(lambda=c(4, 0.1)), discrete = TRUE)

x <- simulate(x, nsim=1000)

# use above parameter values as initial values
y <- BaumWelch(x)

print(summary(y))
print(logLik(y))
hist(residuals(y))

# check parameter estimates
print(sum(y$delta))
print(y$Pi %*% rep(1, ncol(y$Pi)))

#----- Test Exponential Distribution -----

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(0, 1)

x <- dthmm(NULL, Pi, delta, "exp",
           list(rate=c(2, 0.1)))

x <- simulate(x, nsim=1000)

# use above parameter values as initial values
y <- BaumWelch(x)

print(summary(y))
print(logLik(y))
hist(residuals(y))

# check parameter estimates
print(sum(y$delta))
print(y$Pi %*% rep(1, ncol(y$Pi)))

#----- Test Beta Distribution -----

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(0, 1)

x <- dthmm(NULL, Pi, delta, "beta",
           list(shape1=c(2, 6), shape2=c(6, 2)))

x <- simulate(x, nsim=1000)

# use above parameter values as initial values
y <- BaumWelch(x)

print(summary(y))
print(logLik(y))
hist(residuals(y))

# check parameter estimates
print(sum(y$delta))
print(y$Pi %*% rep(1, ncol(y$Pi)))

#----- Test Binomial Distribution -----

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(0, 1)

# vector of "fixed & known" number of Bernoulli trials
pn <- list(size=rpois(1000, 10)+1)

x <- dthmm(NULL, Pi, delta, "binom",
           list(prob=c(0.2, 0.8)), pn, discrete=TRUE)

x <- simulate(x, nsim=1000)

# use above parameter values as initial values
y <- BaumWelch(x)

print(summary(y))
print(logLik(y))
hist(residuals(y))

# check parameter estimates
print(sum(y$delta))
print(y$Pi %*% rep(1, ncol(y$Pi)))

#----- Test Gamma Distribution -----

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(0, 1)

pm <- list(rate=c(4, 0.5), shape=c(3, 3))

x <- seq(0.01, 10, 0.01)
plot(x, dgamma(x, rate=pm$rate[1], shape=pm$shape[1]),
     type="l", col="blue", ylab="Density")
points(x, dgamma(x, rate=pm$rate[2], shape=pm$shape[2]),
       type="l", col="red")

x <- dthmm(NULL, Pi, delta, "gamma", pm)

x <- simulate(x, nsim=1000)

# use above parameter values as initial values
y <- BaumWelch(x)

print(summary(y))
print(logLik(y))
hist(residuals(y))

# check parameter estimates
print(sum(y$delta))
print(y$Pi %*% rep(1, ncol(y$Pi)))

Estep

E-Step of EM Algorithm for DTHMM

Description

Performs the expectation step of the EM algorithm for a dthmm process. This function is called by the BaumWelch function. The Baum-Welch algorithm referred to in the HMM literature is a version of the EM algorithm.

Usage

Estep(x, Pi, delta, distn, pm, pn = NULL)

Arguments

x      is a vector of length n containing the observed process.

Pi     is the current estimate of the m × m transition probability matrix of the hidden Markov chain.

distn  is a character string with the distribution name, e.g. "norm" or "pois". If the distribution is specified as "wxyz" then a probability (or density) function called "dwxyz" should be available, in the standard R format (e.g. dnorm or dpois).

pm     is a list object containing the current (Markov dependent) parameter estimates associated with the distribution of the observed process (see dthmm).

pn     is a list object containing the observation dependent parameter values associated with the distribution of the observed process (see dthmm).

delta  is the current estimate of the marginal probability distribution of the m hidden states.

Details

Let uij be one if Ci = j and zero otherwise. Further, let vijk be one if Ci−1 = j and Ci = k, and zero otherwise. Let X(n) contain the complete observed process. Then, given the current model parameter estimates, the returned value u[i,j] is

uij = E[uij | X(n)] = Pr{Ci = j | X(n) = x(n)},

and v[i,j,k] is

vijk = E[vijk | X(n)] = Pr{Ci−1 = j, Ci = k | X(n) = x(n)},

where j, k = 1, ..., m and i = 1, ..., n.

Value

A list object is returned with the following components.

u   an n × m matrix containing estimates of the conditional expectations. See "Details".

v   an n × m × m array containing estimates of the conditional expectations. See "Details".

LL  the current value of the log-likelihood.

Author(s)

The algorithm has been taken from Zucchini (2005).

References

Cited references are listed on the HiddenMarkov manual page.

See Also

BaumWelch, Mstep
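For a very small model, the conditional expectations uij can be computed by brute force, summing the complete data likelihood over every possible state sequence. The base-R check below (an illustration, not package code; all names are invented) makes the definition uij = Pr{Ci = j | X(n)} concrete:

```r
# Brute-force computation of u[i,j] = Pr{C_i = j | X} for a tiny
# 2-state Gaussian HMM with n = 3 observations; illustration only.
Pi    <- matrix(c(0.9, 0.1,
                  0.2, 0.8), 2, 2, byrow = TRUE)
delta <- c(0.5, 0.5)
mu    <- c(0, 3)
x     <- c(-0.1, 0.2, 2.9)
n <- length(x); m <- 2

u <- matrix(0, n, m)
L <- 0
for (c1 in 1:m) for (c2 in 1:m) for (c3 in 1:m) {
    cseq <- c(c1, c2, c3)
    # complete data likelihood: delta * transition probs * densities
    Lc <- delta[c1] * Pi[c1, c2] * Pi[c2, c3] *
          prod(dnorm(x, mean = mu[cseq], sd = 1))
    L <- L + Lc
    for (i in 1:n) u[i, cseq[i]] <- u[i, cseq[i]] + Lc
}
u <- u / L     # condition on the observed data

# each row of u is a probability distribution over the m states
stopifnot(all(abs(rowSums(u) - 1) < 1e-12))
# the third observation lies near mu[2], so state 2 dominates there
stopifnot(u[3, 2] > u[3, 1])
```

The same enumeration idea applies to the v array, accumulating Lc into v[i, cseq[i-1], cseq[i]]; Estep of course computes both via the forward-backward recursions rather than by enumeration.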

forwardback

Forward and Backward Probabilities of DTHMM

Description

These functions calculate the forward and backward probabilities for a dthmm process, as defined in MacDonald & Zucchini (1997, Page 60).

Usage

backward(x, Pi, distn, pm, pn = NULL)
forward(x, Pi, delta, distn, pm, pn = NULL)
forwardback(x, Pi, delta, distn, pm, pn = NULL, fortran = TRUE)
forwardback.dthmm(Pi, delta, prob, fortran = TRUE, fwd.only = FALSE)

Arguments

x         is a vector of length n containing the observed process.

Pi        is the m × m transition probability matrix of the hidden Markov chain.

delta     is the marginal probability distribution of the m hidden states.

distn     is a character string with the distribution name, e.g. "norm" or "pois". If the distribution is specified as "wxyz" then a probability (or density) function called "dwxyz" should be available, in the standard R format (e.g. dnorm or dpois).

pm        is a list object containing the current (Markov dependent) parameter estimates associated with the distribution of the observed process (see dthmm).

pn        is a list object containing the observation dependent parameter values associated with the distribution of the observed process (see dthmm).

prob      an n × m matrix containing the observation probabilities or densities (rows) by Markov state (columns).

fortran   logical, if TRUE (default) use the Fortran code, else use the R code.

fwd.only  logical, if FALSE (default) calculate both forward and backward probabilities, else calculate and return only forward probabilities and log-likelihood.

Details

Denote the n × m matrices containing the forward and backward probabilities as A and B, respectively. Then the (i, j)th elements are

αij = Pr{X1 = x1, ..., Xi = xi, Ci = j}

and

βij = Pr{Xi+1 = xi+1, ..., Xn = xn | Ci = j}.

Further, the diagonal elements of the product matrix AB′ are all the same, taking the value of the likelihood.

Value

The function forwardback returns a list with two matrices containing the forward and backward (log) probabilities, logalpha and logbeta, respectively, and the log-likelihood (LL).

The functions backward and forward return a matrix containing the backward and forward (log) probabilities, logbeta and logalpha, respectively.

Author(s)

The algorithm has been taken from Zucchini (2005).

References

Cited references are listed on the HiddenMarkov manual page.

See Also

logLik

Examples

# Set Parameter Values

Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
               1/3, 1/3, 1/3,   0,   0,
                 0, 1/3, 1/3, 1/3,   0,
                 0,   0, 1/3, 1/3, 1/3,
                 0,   0,   0, 1/2, 1/2),
             byrow=TRUE, nrow=5)
p <- c(1, 2, 4, 5, 3)
delta <- c(0, 1, 0, 0, 0)

#----- Poisson HMM -----

x <- dthmm(NULL, Pi, delta, "pois", list(lambda=p), discrete=TRUE)
x <- simulate(x, nsim=1000)

y <- forwardback(x$x, Pi, delta, "pois", list(lambda=p))

# below should be same as LL for all time points
print(log(diag(exp(y$logalpha) %*% t(exp(y$logbeta)))))
print(y$LL)

#----- Gaussian HMM -----

x <- dthmm(NULL, Pi, delta, "norm", list(mean=p, sd=p/3))
x <- simulate(x, nsim=1000)

y <- forwardback(x$x, Pi, delta, "norm", list(mean=p, sd=p/3))
# below should be same as LL for all time points
print(log(diag(exp(y$logalpha) %*% t(exp(y$logbeta)))))
print(y$LL)

HiddenMarkov              Overview of Package HiddenMarkov

Description

In this topic we give an overview of the package.

Classes of Hidden Markov Models Analysed

The classes of models currently fitted by the package are listed below. Each are defined within an object that contains the data, current parameter values, and other model characteristics.

Discrete Time Hidden Markov Model: is described under the topic dthmm. This model can be simulated or fitted to data by defining the required model structure within an object of class "dthmm".

Markov Modulated Generalised Linear Model: is described under the topic mmglm1.

Markov Modulated Poisson Process: is described under the topic mmpp. This model can be simulated or fitted to data by defining the required model structure within an object of class "mmpp".

Main Tasks Performed by the Package

The main tasks performed by the package are listed below. These can be achieved by calling the appropriate generic function.

Simulation of HMMs: can be performed by the function simulate.

Parameter Estimation: can be performed by the functions BaumWelch (EM algorithm), or neglogLik together with nlm or optim (Newton type methods or grid searches).

Model Residuals: can be extracted with the function residuals.

Model Summary: can be extracted with the function summary.

Log-Likelihood: can be calculated with the function logLik.

Prediction of the Markov States: can be performed by the function Viterbi.

All other functions in the package are called from within the above generic functions, and only need to be used if their output is specifically required. I have referred to some of these other functions as "2nd level" functions, for example see the topic mmpp-2nd-level-functions.

Organisation of Topics in the Package

Cited References: anywhere in the manual are only listed within this topic.

General Documentation: topics summarising general structure are indexed under the keyword "documentation" in the Index.

Acknowledgement

Many of the functions contained in the package are based on those of Walter Zucchini (2005).

References

Baum, L.E., Petrie, T., Soules, G. & Weiss, N. (1970). A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Annals of Mathematical Statistics 41(1), 164-171. DOI: http://dx.doi.org/10.1214/aoms/1177697196

Charnes, A., Frome, E.L. & Yu, P.L. (1976). The equivalence of generalized least squares and maximum likelihood estimates in the exponential family. JASA 71(353), 169-171. DOI: http://dx.doi.org/10.2307/2285762

Dempster, A.P., Laird, N.M. & Rubin, D.B. (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). J. Royal Statist. Society B 39(1), 1-38.

Elliott, R.J., Aggoun, L. & Moore, J.B. (1994). Hidden Markov Models: Estimation and Control. Springer-Verlag, New York.

Harte, D. (2006). Mathematical Background Notes for Package "HiddenMarkov". Statistics Research Associates, Wellington. URL: http://goo.gl/J6s7M, file "notes.pdf".

Hartley, H.O. (1958). Maximum likelihood estimation from incomplete data. Biometrics 14(2), 174-194. DOI: http://dx.doi.org/10.2307/2527783

Klemm, A., Lindemann, C. & Lohmann, M. (2003). Modeling IP traffic using the batch Markovian arrival process. Performance Evaluation 54(2), 149-173. DOI: http://dx.doi.org/10.1016/S0166-5316(03)00067-1

MacDonald, I.L. & Zucchini, W. (1997). Hidden Markov and Other Models for Discrete-valued Time Series. Chapman and Hall/CRC, Boca Raton.

McCullagh, P. & Nelder, J.A. (1989). Generalized Linear Models (2nd Edition). Chapman and Hall, London.

Rabiner, L.R. (1989). A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77(2), 257-286. DOI: http://dx.doi.org/10.1109/5.18626

Roberts, W.J.J., Ephraim, Y. & Dieguez, E. (2006). On Ryden's EM algorithm for estimating MMPPs. IEEE Signal Processing Letters 13(6), 373-376. DOI: http://dx.doi.org/10.1109/LSP.2006.871709

Ryden, T. (1994). Parameter estimation for Markov modulated Poisson processes. Stochastic Models 10(4), 795-829. DOI: http://dx.doi.org/10.1080/15326349408807323

Ryden, T. (1996). An EM algorithm for estimation in Markov-modulated Poisson processes. Computational Statistics & Data Analysis 21(4), 431-447. DOI: http://dx.doi.org/10.1016/0167-9473(95)00025-9

Zucchini, W. (2005). Hidden Markov Models Short Course, 3-4 April 2005. Macquarie University, Sydney.
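As a minimal end-to-end illustration of how the tasks listed above fit together, a typical session might look as follows; the parameter values are arbitrary, chosen only for this sketch.

```r
library(HiddenMarkov)

Pi <- matrix(c(0.9, 0.1,
               0.2, 0.8),
             byrow=TRUE, nrow=2)
delta <- c(0, 1)

# define the model structure, then simulate data from it
x <- dthmm(NULL, Pi, delta, "pois", list(lambda=c(2, 6)), discrete=TRUE)
x <- simulate(x, nsim=500)

y <- BaumWelch(x)        # parameter estimation (EM algorithm)
print(logLik(y))         # log-likelihood of the fitted model
print(summary(y))        # model summary
states <- Viterbi(y)     # most probable hidden state sequence
resid <- residuals(y)    # model residuals
```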

logLik                    Log Likelihood of Hidden Markov Model

Description

Provides methods for the generic function logLik.

Usage

## S3 method for class 'dthmm'
logLik(object, fortran=TRUE, ...)
## S3 method for class 'mmglm0'
logLik(object, fortran=TRUE, ...)
## S3 method for class 'mmglm1'
logLik(object, fortran=TRUE, ...)
## S3 method for class 'mmglmlong1'
logLik(object, fortran=TRUE, ...)
## S3 method for class 'mmpp'
logLik(object, fortran=TRUE, ...)

Arguments

object    an object with class "dthmm", "mmglm0", "mmglm1", "mmglmlong1" or "mmpp".
fortran   logical, if TRUE (default) use the Fortran code, else use the R code.
...       other arguments.

Details

The methods provided here will always recalculate the log-likelihood even if it is already contained within the object. This enables the user to change parameter or data values within the object and recalculate the log-likelihood for the revised configuration.

The code for the methods "dthmm", "mmglm0", "mmglm1", "mmglmlong1" and "mmpp" can be viewed by typing logLik.dthmm, logLik.mmglm0, logLik.mmglm1, logLik.mmglmlong1 or logLik.mmpp, respectively, on the R command line.

Value

Returns the value of the log-likelihood.

Examples

Pi <- matrix(c(1/2, 1/2,   0,
               1/3, 1/3, 1/3,
                 0, 1/2, 1/2),
             byrow=TRUE, nrow=3)

x <- dthmm(NULL, Pi, c(0, 1, 0), "norm",
           list(mean=c(1, 6, 3), sd=c(1, 0.5, 1)))

x <- simulate(x, nsim=100)

print(logLik(x))

mchain                    Markov Chain Object

Description

Creates a Markov chain object with class "mchain". It does not simulate data.

Usage

mchain(x, Pi, delta, nonstat = TRUE)

Arguments

x        is a vector of length n containing the observed process, else it is specified as NULL. This is used when there are no data and a process is to be simulated.
Pi       is the m x m transition probability matrix of the Markov chain.
delta    is the marginal probability distribution of the m state Markov chain at the first time point.
nonstat  is logical, TRUE (default) if the homogeneous Markov chain is assumed to be non-stationary. See "Details" below.

Value

A list object with class "mchain", containing the above arguments as named components.

Examples

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)

# Create a Markov chain object with no data (NULL)
x <- mchain(NULL, Pi, c(0.5, 0.5))

# Simulate some data
x <- simulate(x, nsim=2000)

# estimate transition probabilities
estPi <- table(x$mc[-length(x$mc)], x$mc[-1])
rowtotal <- estPi %*% matrix(1, nrow=nrow(Pi), ncol=1)
estPi <- diag(as.vector(1/rowtotal)) %*% estPi
print(estPi)
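Related to the nonstat argument: when the chain is assumed to be stationary, the marginal distribution delta is the stationary distribution of Pi (the package provides the function compdelta for this; see the compdelta topic). As a sketch for comparison, the stationary distribution can be computed by hand by solving delta %*% Pi = delta subject to the probabilities summing to one; the name statdist below is invented for this sketch.

```r
# Stationary distribution of a transition matrix Pi:
# solve delta %*% Pi = delta together with sum(delta) = 1,
# by stacking the normalisation row onto (I - Pi)'.
statdist <- function(Pi) {
    m <- nrow(Pi)
    A <- rbind(t(diag(m) - Pi), rep(1, m))
    b <- c(rep(0, m), 1)
    as.vector(qr.solve(A, b))   # least squares solution of the stacked system
}

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
print(statdist(Pi))   # compare with compdelta(Pi)
```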

mmglm                     Markov Modulated GLM Object

Description

These functions create Markov modulated generalised linear model objects. These functions are in development and may change, see "Under Development" below.

Usage

mmglm0(x, Pi, delta, family, link, beta, glmformula = formula(y~x1),
       sigma = NA, nonstat = TRUE, msg = TRUE)
mmglm1(y, Pi, delta, glmfamily, beta, Xdesign, sigma = NA,
       nonstat = TRUE, size = NA, msg = TRUE)
mmglmlong1(y, Pi, delta, glmfamily, beta, Xdesign, longitude,
           sigma = NA, nonstat = TRUE, size = NA, msg = TRUE)

Arguments

x          a dataframe containing the observed variable (i.e. the response variable in the generalised linear model) and the covariate. The function mmglm0 requires that the response variable be named y and the covariate x1. Alternatively, x could be specified as NULL, meaning that the data will be added later (e.g. simulated). The functions mmglm1 and mmglmlong1 do not have these naming restrictions.
y          numeric vector, the response variable. If family == "binomial", it is the number of successes (see argument size).
Pi         is the m x m transition probability matrix of the hidden Markov chain.
delta      is the marginal probability distribution of the m hidden states at the first time point.
family     character string, the GLM family, one of "gaussian", "poisson", "Gamma" or "binomial".
link       character string, the link function. If family == "binomial", then one of "logit", "probit" or "cloglog"; else one of "identity", "inverse" or "log".
glmfamily  a family object defining the glm family and link function. It is currently restricted to Gaussian, Poisson, Binomial or Gamma models with the standard link functions provided by glm.
Xdesign    a nN x p design matrix, where p is the number of parameters in the linear predictor, N is the number of subjects (N = 1 in mmglm1), and n is the number of observations for each subject (assumed to be the same).
beta       a p x m matrix containing parameter values, used as initial values during estimation. In the case of the simple regression model of mmglm0, p = 2. In the cases of mmglm1 and mmglmlong1, p is the number of columns of Xdesign.
glmformula the model formula for mmglm0; currently the only model formula is y~x1. In the cases of mmglm1 and mmglmlong1, the model formula is currently implicitly defined through Xdesign.
sigma      if family == "gaussian", then it is the variance; if family == "Gamma", then it is 1/sqrt(shape). It is of length m, one value for each Markov state.
nonstat    is logical, TRUE (default) if the homogeneous Markov chain is assumed to be non-stationary.
longitude  a vector the same length as y identifying the subject for each observation. The observations must be grouped by subject, and ordered by "time" within subject.
size       is the number of Bernoulli trials in each observation when the glm family is binomial. It is the same length as y.
msg        is logical, suppress messages about developmental status.

Details

This family of models is similar in nature to those of the class dthmm, in that both classes have the distribution of the observed variable being "modulated" by the changing hidden Markov state. They differ slightly in the mechanism. This family assumes that the mean of the observation distribution can be expressed as a linear model of other known variables, but it is the parameters in the linear predictor that are being modulated by the hidden Markov process, thus causing the changes in the observed means. The linear model is assumed to be a generalised linear model as described by McCullagh & Nelder (1989).

The function mmglm0 is a very simple trivial case where the linear predictor is of the form β0 + β1 x1. The version mmglm1 does not have this limitation; its model formula is defined implicitly through the structure of the specified design matrix.

The model mmglmlong1 is similar to mmglm1 but can be applied to longitudinal observations. Models of the form given by mmglm1 are assumed to have one time series, and from a theoretical perspective, one would be interested in the asymptotic properties of the parameter estimates as the series length gets very large. In the longitudinal case (mmglmlong1), the series of observations per individual is probably very small (< 10), and hence interest is in the asymptotic properties as the number of individuals becomes large. Note that in the longitudinal case, the number of observations per individual is assumed to be the same.

If family == "binomial" then the response variable y is interpreted as the number of successes. The dataframe x must also contain a variable called size being the number of Bernoulli trials. This is different to the format used by the function glm, where y would be a matrix with two columns containing the number of successes and failures, respectively. The different format here allows one to specify the number of Bernoulli trials only, so that the number of successes or failures can be simulated later.

When the density function of the response variable is from the exponential family (Charnes et al, 1976, Eq. 2.1), the likelihood function (Charnes et al, 1976, Eq. 2.4) can be maximised by using iterative weighted least squares (Charnes et al, 1976, Eqs 2.2 and 2.3). This is the method used by the R function glm. In this Markov modulated version of the model, the third term of the complete data log-likelihood, as given in Harte (2006), needs to be maximised. This is simply the sum of the individual log-likelihood contributions of the response variable weighted by the Markov state probabilities calculated in the E-step. This can also be maximised using iterative least squares by passing these additional weights (Markov state probabilities) into the glm function.

The responses are assumed to be conditionally independent given the value of the Markov chain and the explanatory variables in the linear predictor.

Value

A list object with class "mmglm0", "mmglm1" or "mmglmlong1", containing the above arguments as named components.

Under Development

These functions are still being developed. The most recent version is mmglm1, along with mmglmlong1 which has the flexibility to include longitudinal data. Further development versions will be numbered sequentially. In previous releases of the package there was only one function called mmglm; this has been renamed to mmglm0. The name mmglm has been reserved for the final stable version, at which point the numbered versions will become deprecated.

References

Cited references are listed on the HiddenMarkov manual page.

Examples

#--------------------------------------------------------
#     Gaussian with identity link function
#     using mmglm0

delta <- c(0, 1)

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)

beta <- matrix(c(0.1, -0.1,
                 1.0,  5.0),
               byrow=TRUE, nrow=2)

x <- mmglm0(NULL, Pi, delta, family="gaussian", link="identity",
            beta=beta, sigma=c(1, 2))

n <- 1000
x <- simulate(x, nsim=n, seed=10)

# Increase maxiter below to achieve convergence
# Has been restricted to minimise time of package checks
y <- BaumWelch(x, bwcontrol(maxiter=2))

w <- hist(residuals(y))
z <- seq(-3, 3, 0.1)
points(z, dnorm(z)*n*(w$breaks[2]-w$breaks[1]), col="red", type="l")
box()

print(summary(y))
print(logLik(y))

#--------------------------------------------------------
#     Gaussian with log link function
#     using mmglm1

n <- 1000

# the range of x needs changing according to the glmfamily
x <- seq(-0.9, 1.5, length.out=n)
colour <- c("blue", "green", "red")
colnum <- rep(1:3, n/3+1)[1:n] - 1

data <- data.frame(x=x, colour=colour[colnum+1])

# will simulate response variable, not required in formula
# design matrix only depends on RHS of formula
glmformula <- formula( ~ x + I(x^2) + colour)
glmfamily <- gaussian(link="log")
Xdesign <- model.matrix(glmformula, data=data)

# --- Parameter Values and Simulation ---

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(1, 0)
sd <- c(1.2, 1)

beta <- matrix(c(-1.0, -1.5,
                 -2.0, -1.8,
                  3.0,  2.8,
                  1.0,  0.8,
                  2.0,  2.2),
               byrow=TRUE, nrow=ncol(Xdesign), ncol=ncol(Pi))

y <- mmglm1(NULL, Pi, delta, glmfamily, beta, Xdesign, sigma=sd)

y <- simulate(y, seed=5)

# --- Estimation ---
# Increase maxiter below to achieve convergence
# Has been restricted to minimise time of package checks
tmp <- BaumWelch(y, bwcontrol(posdiff=FALSE, maxiter=2))

print(summary(tmp))

#-------------------------------------------------
#     Binomial with logit link function
#     using mmglm1

# n = series length
n <- 1000

# the range of x needs changing according to the glmfamily
x <- seq(-1, 1.5, length.out=n)
colour <- c("blue", "green", "red")
colnum <- rep(1:3, n/3+1)[1:n] - 1

data <- data.frame(x=x, colour=colour[colnum+1])
glmformula <- formula( ~ x + I(x^2) + colour)
glmfamily <- binomial(link="logit")
Xdesign <- model.matrix(glmformula, data=data)

# --- Parameter Values and Simulation ---

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(1, 0)

beta <- matrix(c(-1.0, -1.5,
                 -2.0, -1.8,
                  3.0,  2.8,
                  1.0,  0.8,
                  2.0,  2.2),
               byrow=TRUE, nrow=ncol(Xdesign), ncol=ncol(Pi))

y <- mmglm1(NULL, Pi, delta, glmfamily, beta, Xdesign,
            size=rep(100, n))

# each element of y$y is the number of successes in 100 Bernoulli trials
y <- simulate(y, seed=5)

# --- Estimation ---
# Increase maxiter below to achieve convergence
# Has been restricted to minimise time of package checks
tmp <- BaumWelch(y, bwcontrol(posdiff=FALSE, maxiter=2))

print(summary(tmp))

#-------------------------------------------------
#     Gaussian with log link function, longitudinal data
#     using mmglmlong1

# n = series length for each subject
# N = number of subjects
n <- 5
N <- 1000

# the range of x need changing according to the glmfamily
x <- seq(-0.9, 1.5, length.out=n)
colour <- c("blue", "green", "red")
colnum <- rep(1:3, n/3+1)[1:n] - 1

data <- data.frame(x=x, colour=colour[colnum+1])

# will simulate response variable, not required in formula
# design matrix only depends on RHS of formula
glmformula <- formula( ~ x + I(x^2) + colour)
glmfamily <- gaussian(link="log")
Xdesign0 <- model.matrix(glmformula, data=data)

# multiple subjects
Xdesign <- NULL
for (i in 1:N) Xdesign <- rbind(Xdesign, Xdesign0)

# --- Parameter Values and Simulation ---

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(0.9, 0.1)
sd <- c(1.2, 1)

beta <- matrix(c(-1.0, -1.5,
                 -2.0, -1.8,
                  3.0,  2.8,
                  1.0,  0.8,
                  2.0,  2.2),
               byrow=TRUE, nrow=ncol(Xdesign), ncol=ncol(Pi))

y <- mmglmlong1(NULL, Pi, delta, glmfamily, beta, Xdesign,
                sigma=sd, longitude=rep(1:N, each=n))

y <- simulate(y, seed=5)

# --- Estimation ---
#   Note: the "Not run" blocks below are not run during package
#   checks as the makeSOCKcluster definition is specific to my
#   network, modify accordingly if you want parallel processing.

cl <- NULL
## Not run: 
if (require(snow)){
    cl <- makeSOCKcluster(c("localhost", "horoeka.localdomain",
                            "horoeka.localdomain", "localhost"))
}
## End(Not run)

# Increase maxiter below to achieve convergence
# Has been restricted to minimise time of package checks
tmp <- BaumWelch(y, bwcontrol(posdiff=FALSE, maxiter=2),
                 SNOWcluster=cl)

## Not run: 
if (!is.null(cl)){
    stopCluster(cl)
    rm(cl)
}
## End(Not run)

print(summary(tmp))

#-------------------------------------------------
#     Binomial with logit link function, longitudinal data
#     using mmglmlong1

# n = series length for each subject
# N = number of subjects
n <- 10
N <- 100

# the range of x need changing according to the glmfamily
x <- seq(-1, 1.5, length.out=n)
colour <- c("blue", "green", "red")
colnum <- rep(1:3, n/3+1)[1:n] - 1

data <- data.frame(x=x, colour=colour[colnum+1])
glmformula <- formula( ~ x + I(x^2) + colour)
glmfamily <- binomial(link="logit")
Xdesign0 <- model.matrix(glmformula, data=data)

# multiple subjects
Xdesign <- NULL
for (i in 1:N) Xdesign <- rbind(Xdesign, Xdesign0)

# --- Parameter Values and Simulation ---

Pi <- matrix(c(0.8, 0.2,
               0.3, 0.7),
             byrow=TRUE, nrow=2)
delta <- c(0.5, 0.5)

beta <- matrix(c(-1.0, -1.5,
                 -2.0, -1.8,
                  3.0,  2.8,
                  1.0,  0.8,
                  2.0,  2.2),
               byrow=TRUE, nrow=ncol(Xdesign), ncol=ncol(Pi))

y <- mmglmlong1(NULL, Pi, delta, glmfamily, beta, Xdesign,
                longitude=rep(1:N, each=n), size=rep(200, N*n))

y <- simulate(y, seed=5)

# --- Estimation ---
# Increase maxiter below to achieve convergence
# Has been restricted to minimise time of package checks
tmp <- BaumWelch(y, bwcontrol(posdiff=FALSE, maxiter=1))

print(summary(tmp))
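The binomial response format described under "Details" above differs from that used by glm; the contrast can be sketched as follows, where all values are illustrative only.

```r
# glm's format: a two-column matrix of (successes, failures)
size <- rep(100, 5)              # Bernoulli trials per observation
succ <- c(37, 42, 55, 61, 48)    # number of successes
glm.response <- cbind(succ, size - succ)

# mmglm1's format: y is the vector of successes, and the number of
# trials is passed separately through the size argument, so that
# y can be left unspecified (NULL) and simulated later, e.g.
# y <- mmglm1(NULL, Pi, delta, glmfamily, beta, Xdesign, size=size)
```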

mmglm-2nd-level-functions
                          Markov Modulated Generalised Linear Model - 2nd Level Functions

Description

These functions will generally not be called directly by the user.

Usage

Estep.mmglm1(object, fortran=TRUE)
Mstep.mmglm1(object, u)

Arguments

object    an object with class "mmglm1" or "mmglmlong1".
u         a matrix of weights by Markov state and observation used in fitting the generalised linear model.
fortran   logical, if TRUE (default) use the Fortran code in the forward-backward equations, else use the R code.

mmpp                      Markov Modulated Poisson Process Object

Description

Creates a Markov modulated Poisson process model object with class "mmpp".

Usage

mmpp(tau, Q, delta, lambda, nonstat = TRUE)

Arguments

tau      vector containing the event times. Note that the first event is at time zero. Alternatively, tau could be specified as NULL, meaning that the data will be added later (e.g. simulated).
Q        the infinitesimal generator matrix of the Markov process.
delta    is the marginal probability distribution of the m hidden states at time zero.
lambda   a vector containing the Poisson rates.
nonstat  is logical, TRUE (default) if the homogeneous Markov process is assumed to be non-stationary.

Details

The Markov modulated Poisson process is based on a hidden Markov process in continuous time. The initial state probabilities (at time zero) are specified by delta, and the transition rates by the Q matrix. The rate parameter of the Poisson process (lambda) is determined by the current state of the hidden Markov process. Within each state, the Poisson process is homogeneous (constant rate parameter). A Poisson event is assumed to occur at time zero and at the end of the observation period; however, state transitions of the Markov process do not necessarily coincide with Poisson events. For more details, see Ryden (1996).

Value

A list object with class "mmpp", containing the above arguments as named components.

References

Cited references are listed on the HiddenMarkov manual page.

Examples

Q <- matrix(c(-2,  2,
               1, -1),
             byrow=TRUE, nrow=2)/10

# NULL indicates that we have no data at this point
x <- mmpp(NULL, Q, delta=c(0, 1), lambda=c(5, 1))

x <- simulate(x, nsim=5000, seed=5)

y <- BaumWelch(x)
print(summary(y))

# log-likelihood using initial parameter values
print(logLik(x))

# log-likelihood using estimated parameter values
print(logLik(y))
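The mechanism described in the mmpp "Details" above (exponential holding times in each hidden state, with Poisson events generated at the current state's rate) can be sketched directly in R. This is an illustrative simulation only, not the package's simulate method; the name rmmpp.sketch is invented, and the sketch omits the convention that a Poisson event also occurs at the end of the observation period.

```r
# Simulate an MMPP on [0, Tend] by alternating hidden states:
# hold in state i for an Exp(-Q[i,i]) time while generating Poisson
# events at rate lambda[i], then jump according to Q's off-diagonals.
rmmpp.sketch <- function(Tend, Q, delta, lambda) {
    m <- nrow(Q)
    state <- sample(1:m, 1, prob=delta)
    t0 <- 0
    tau <- 0                                   # an event is assumed at time zero
    while (t0 < Tend) {
        hold <- rexp(1, rate=-Q[state, state]) # holding time in current state
        h <- min(hold, Tend - t0)
        k <- rpois(1, lambda[state] * h)       # event count within the interval
        if (k > 0) tau <- c(tau, t0 + sort(runif(k, 0, h)))
        t0 <- t0 + hold
        p <- Q[state, ]
        p[state] <- 0                          # jump proportionally to off-diagonal rates
        state <- sample(1:m, 1, prob=p)
    }
    tau
}
```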

mmpp-2nd-level-functions
                          Markov Modulated Poisson Process - 2nd Level Functions

Description

These functions have not been put into a generic format, but are called by generic functions.

Usage

forwardback.mmpp(tau, Q, delta, lambda, fortran = TRUE, fwd.only = FALSE)
Estep.mmpp(tau, Q, delta, lambda, fortran = TRUE)

Arguments

tau       vector containing the interevent times. Note that the first event is at time zero.
Q         the infinitesimal generator matrix of the Markov process.
lambda    a vector containing the Poisson rates.
delta     is the marginal probability distribution of the m hidden states at time zero.
fortran   logical, if TRUE (default) use the Fortran code, else use the R code.
fwd.only  logical, if FALSE (default) calculate both forward and backward probabilities; else calculate and return only the forward probabilities and the log-likelihood.

Details

These functions use the algorithm given by Ryden (1996) based on eigenvalue decompositions.

References

Cited references are listed on the HiddenMarkov manual page.

Mstep                     M-Step of EM Algorithm for DTHMM

Description

Performs the maximisation step of the EM algorithm for a dthmm process. This function is called by the BaumWelch function. The Baum-Welch algorithm used in the HMM literature is a version of the EM algorithm.

Usage

Mstep.beta(x, cond, pm, pn, maxiter = 200)
Mstep.binom(x, cond, pm, pn)
Mstep.exp(x, cond, pm, pn)
Mstep.gamma(x, cond, pm, pn, maxiter = 200)
Mstep.glm(x, cond, pm, pn, family, link)
Mstep.lnorm(x, cond, pm, pn)
Mstep.logis(x, cond, pm, pn, maxiter = 200)
Mstep.norm(x, cond, pm, pn)
Mstep.pois(x, cond, pm, pn)

Arguments

x        is a vector of length n containing the observed process.
cond     is an object created by Estep.
family   character string, the GLM family, one of "gaussian", "poisson", "Gamma" or "binomial".
link     character string, the link function. If family == "binomial", then one of "logit", "probit" or "cloglog"; else one of "identity", "inverse" or "log".
pm       is a list object containing the current (Markov dependent) parameter estimates associated with the distribution of the observed process (see dthmm). These are only used as initial values if the algorithm within the Mstep is iterative.
pn       is a list object containing the observation dependent parameter values associated with the distribution of the observed process (see dthmm).
maxiter  maximum number of Newton-Raphson iterations.

Details

The functions Mstep.beta, Mstep.binom, Mstep.exp, Mstep.gamma, Mstep.glm, Mstep.lnorm, Mstep.logis, Mstep.norm and Mstep.pois perform the maximisation step for the Beta, Binomial, Exponential, Gamma, GLM, Log Normal, Logistic, Normal and Poisson distributions, respectively. Each function has the same argument list, even if specific arguments are redundant, because the functions are called from within other functions in a generic like manner. Specific notes for some follow.

Mstep.beta  The R functions for the Beta Distribution have arguments shape1, shape2 and ncp. We only use shape1 and shape2, i.e. ncp is assumed to be zero. Different combinations of "shape1" and "shape2" can be "time" dependent (specified in pn) and Markov dependent (specified in pm). However, each should only be specified in one (see topic dthmm).

Mstep.binom  The R functions for the Binomial Distribution have arguments size and prob. The size argument of the Binomial Distribution should always be specified in the pn argument (see topic dthmm).

Mstep.gamma  The R functions for the GammaDist have arguments shape, rate and scale. Since scale is redundant, we only use shape and rate. Different combinations of "shape" and "rate" can be "time" dependent (specified in pn) and Markov dependent (specified in pm). However, each should only be specified in one (see topic dthmm).

Mstep.lnorm  The R functions for the Lognormal Distribution have arguments meanlog and sdlog. Different combinations of "meanlog" and "sdlog" can be "time" dependent (specified in pn) and Markov dependent (specified in pm). However, each should only be specified in one (see topic dthmm).

Mstep.logis  The R functions for the Logistic Distribution have arguments location and scale. Different combinations of "location" and "scale" can be "time" dependent (specified in pn) and Markov dependent (specified in pm). However, each should only be specified in one (see topic dthmm).

Mstep.norm  The R functions for the Normal Distribution have arguments mean and sd. Different combinations of "mean" and "sd" can be "time" dependent (specified in pn) and Markov dependent (specified in pm). However, each should only be specified in one (see topic dthmm).

Value

A list object with the same structure as pm (see topic dthmm).

Modifications and Extensions

The HiddenMarkov package calls the associated functions belonging to the specified probability distribution in a generic way. For example, if the argument distn in dthmm is "xyz", it will expect to find functions pxyz, dxyz and Mstep.xyz. And if simulations are to be performed, it will require rxyz. In this section we describe the required format for the distribution related functions pxyz, dxyz and rxyz, and for the function Mstep.xyz required for the M-step in the EM algorithm.

Consider the examples below of distribution related functions and their arguments.

    pnorm(q, mean = 0, sd = 1, lower.tail = TRUE, log.p = FALSE)
    pbeta(q, shape1, shape2, ncp = 0, lower.tail = TRUE, log.p = FALSE)
    ppois(q, lambda, lower.tail = TRUE, log.p = FALSE)
    pbinom(q, size, prob, lower.tail = TRUE, log.p = FALSE)

    dnorm(x, mean = 0, sd = 1, log = FALSE)
    dbeta(x, shape1, shape2, ncp = 0, log = FALSE)
    dpois(x, lambda, log = FALSE)
    dbinom(x, size, prob, log = FALSE)

    rnorm(n, mean = 0, sd = 1)
    rbeta(n, shape1, shape2, ncp = 0)
    rpois(n, lambda)
    rbinom(n, size, prob)

Note that the probability functions all have a first argument of q, and the last two arguments are all the same, with the same default values. Similarly, the density functions have a first argument of x, and the last argument is the same, with the same defaults. The arguments in the middle are peculiar to the given distribution, one argument for each distribution parameter. Note that the observed process x is univariate.

The functions pxyz (distribution function), dxyz (density) and rxyz (random number generator) must be consistent with the conventions used in the above examples. The software will deduce the distribution argument names from what is specified in pm and pn, and it will call these functions assuming that their argument list is consistent with those described above.

For example, if we have distn="xyz" and the distribution related functions are just the same as those for the normal distribution, we could define them as follows:

    rxyz <- rnorm
    dxyz <- dnorm
    pxyz <- pnorm
    qxyz <- qnorm

See the 2nd example below for full details.

The functions dxyz, pxyz and rxyz must also behave in the same vectorised way as dnorm. For example, if x is a vector, and mean and sd are scalars, then dnorm(x, mean, sd) calculates the density for each element in x using the scalar values of mean and sd; thus the returned value is the same length as x. Alternatively, if x is a scalar and mean and sd are vectors, both of the same length, then the returned value is the same length as mean and is the density of x evaluated at the corresponding pairs of values of mean and sd. The third possibility is that x and one of the distribution parameters, say sd, are vectors of the same length, and mean is a scalar. Then the returned vector will be of the same length as x, where the ith value is the density at x[i] with mean mean and standard deviation sd[i].

Note that the functions for the Multinomial distribution do not have this behaviour. Here the vector x contains the counts for one multinomial experiment, so the vector is used to characterise the multivariate character of the random variable rather than multiple univariate realisations. Further, the distribution parameters (i.e. category probabilities) are characterised as one vector rather than a sequence of separate function arguments.

The estimation of the distribution specific parameters takes place in the M-step, in this case Mstep.xyz. If we have distn="xyz", then there should be a function called Mstep.xyz; it should have arguments (x, cond, pm, pn) and return a list object with the same structure as pm that is specific to the chosen distribution. The parameters that are estimated within this function are named in a consistent way with those that are defined within the dthmm arguments pm and pn. The calculations for cond$u are performed in the E-step. The values of cond$u are essentially probabilities that the process belongs to the given Markov state; hence, when we calculate the distributional parameters (like mean and sd in Mstep.norm) we calculate weighted sums using the probabilities cond$u. The other calculation is the maximisation in the M-step, see for example Mstep.norm. Notice that the estimates of mean and sd in Mstep.norm are weighted by cond$u. This procedure can be shown to give the maximum likelihood estimates of mean and sd, and hence a similar weighting should be used for the distribution "xyz" (see Harte, 2006, for further mathematical detail).

One needs to take a little more care when dealing with distributions like the beta, where the cross derivatives of the log likelihood between the parameters, i.e. ∂²log L/(∂α1 ∂α2), are nonzero. See Mstep.beta for further details.

Now consider a situation where we want to modify the way in which a normal distribution is fitted. Say we know the Markov dependent means, and we only want to estimate the standard deviations. Since both parameters are Markov dependent, they both need to be specified in the pm argument of dthmm. Unfortunately, one cannot easily modify the functions in a package namespace. The simple work-around here is to define a new distribution, say "xyz", then define a new function with the required changes called Mstep.xyz, and utilise the distribution related functions "dxyz" and "pxyz". In this case it is relatively easy (see code in "Examples" below): from the function Mstep.norm, take the code under the section if (all(nms==c("mean", "sd"))), i.e. where both of the parameters are Markov dependent, and replace the line where the mean is estimated by mean <- pm$mean, i.e. leave it as was initially specified.

Note

The Mstep functions can be used to estimate the maximum likelihood parameters from a simple sample. See the example below where this is done for the logistic distribution.

See Also

BaumWelch, Estep

Examples

#    Fit logistic distribution to a simple single sample

#    Simulate data
n <- 2000
location <- -2
scale <- 1.5

x <- rlogis(n, location, scale)

#    give each datum equal weight
cond <- NULL
cond$u <- matrix(rep(1/n, n), ncol=1)

#    calculate maximum likelihood parameter estimates
#    start iterations at values used to simulate
print(Mstep.logis(x, cond, pm=list(location=location, scale=scale)))

#-----------------------------------------------------
#    Example with Gaussian Observations
#    Assume that both mean and sd are Markov dependent, but the means
#    are known and sd requires estimation (See "Modifications" above).

Mstep.xyz <- function(x, cond, pm, pn){
    #   this function is a modified version of Mstep.norm
    #   here the mean is fixed to the values specified in pm$mean
    nms <- sort(names(pm))
    n <- length(x)
    m <- ncol(cond$u)
    if (all(nms==c("mean", "sd"))){
        mean <- pm$mean
        sd <- sqrt(apply((matrix(x, nrow=n, ncol=m) -
                       matrix(mean, nrow=n, ncol=m, byrow=TRUE))^2 * cond$u,
                       MARGIN=2, FUN=sum) /
                   apply(cond$u, MARGIN=2, FUN=sum))
        return(list(mean=mean, sd=sd))
    }
}

#    define the distribution related functions for "xyz"
#    they are the same as those for the Normal distribution
rxyz <- rnorm
dxyz <- dnorm
pxyz <- pnorm
qxyz <- qnorm

sd=p2) x <. 1/2). . discrete=FALSE) 37 neglogLik Negative Log-Likelihood Description Calculates the log-likelihood multiplied by negative one.c(1. "mmglm ". . 1.matrix(c(1/2. n.qnorm Pi <. Usage neglogLik(params. pm. 3) p2 <. sd=sd)) } } # define the distribution related functions for "xyz" # they are the same as those for the Normal distribution rxyz <. x <.5. nrow=3) p1 <.1 pm <. or "mmpp". 6.rnorm dxyz <. an object of class "dthmm". 1/3. 1.dthmm(NULL. . It is in a format that can be used with the functions nlm and optim. seed=5) # use above parameter values as initial values y <.5) n <. byrow=TRUE.BaumWelch(x) print(y$delta) print(y$pm) print(y$Pi) ). . 1/3.list(mean=p1.c( . Pi. FUN=sum)) return(list(mean=mean. pmap) Arguments params object a vector of revised parameter values. object.neglogLik FUN=sum)/apply(cond$u.simulate(x.pnorm qxyz <. 1/3. MARGIN=2.dnorm pxyz <. 1/2. 1/2. "xyz". providing an alternative to the BaumWelch algorithm for maximum likelihood parameter estimation. c( .
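What neglogLik, together with a mapping function, accomplishes can be previewed with a toy one-parameter problem: the rate of an exponential must be positive, so the optimiser works on log(rate) and the mapping back is exp(). A minimal sketch using base R only; the function negll and the simulated data are illustrative assumptions, not package code.

```r
# Sketch: minimise a negative log-likelihood with nlm, working on a
# log-transformed parameter so the optimiser never leaves the
# valid (positive) parameter space.

set.seed(5)
x <- rexp(500, rate=2)

negll <- function(lograte)
    -sum(dexp(x, rate=exp(lograte), log=TRUE))

z <- nlm(negll, p=0)        # start at rate = exp(0) = 1
rate.hat <- exp(z$estimate)
print(rate.hat)             # close to the true rate of 2
```

The same device, applied parameter by parameter through pmap, is what keeps nlm and optim inside the model parameter space in the examples below.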

pmap        a user provided function mapping the revised (or restricted) parameter values params into the appropriate locations in object.

Details

This function is in a format that can be used with the two functions nlm and optim (see Examples below). This provides alternative methods of estimating the maximum likelihood parameter values to that of the EM algorithm provided by BaumWelch, including Newton type methods and grid searches. It can also be used to restrict estimation to a subset of parameters.

The EM algorithm is relatively stable when starting from poor initial values, but convergence is very slow in close proximity to the solution. Newton type methods are very sensitive to initial conditions, but converge much more quickly in close proximity to the solution. This suggests initially using the EM algorithm and then switching to Newton type methods (see Examples below).

Let Θ denote the model parameter space, and let Ψ denote the parameter sub-space (Ψ ⊆ Θ) over which the likelihood function is to be maximised. The argument params contains values in Ψ, and pmap is assigned a function that maps these values into the model parameter space Θ. The maximisation of the model likelihood function can then be restricted to be over the sub-space Ψ; the other parameters will be fixed at the values stored in the model object.

The mapping function assigned to pmap can also be made to impose restrictions on the domain of the parameter sub-space Ψ so that the minimiser cannot jump to parameter values outside of Θ. For example, if a particular parameter must be positive, one can work with a transformed parameter that can take any value on the real line, with the model parameter being the exponential of this transformed parameter. Similarly, a modified logit like transform can be used to ensure that parameter values remain within a fixed interval with finite boundaries. Some functions are provided in the topic Transform-Parameters that may provide useful components within the user provided function assigned to pmap. Examples of these situations can be found in the "Examples" below.

Value

Value of the log-likelihood times negative one.

See Also

nlm, optim, Transform-Parameters, BaumWelch

Examples

    #    Example where the Markov chain is assumed to be stationary

    Pi <- matrix(c(0.8, 0.1, 0.1,
                   0.1, 0.6, 0.3,
                   0.2, 0.3, 0.5),
                 byrow=TRUE, nrow=3)

    #    start simulation in state 2
    delta <- c(0, 1, 0)

    x <- dthmm(NULL, Pi, delta, "exp", list(rate=c(5, 2, 0.2)),
               nonstat=FALSE)
    x <- simulate(x, nsim=5000, seed=5)

    #    Approximate solution using BaumWelch
    x1 <- BaumWelch(x, control=bwcontrol(maxiter=100, tol=1e-05))

    #    Exact solution using nlm
    allmap <- function(y, p){
        #    maps vector back to delta, Pi and rate
        m <- sqrt(length(p))
        y$Pi <- vector2Pi(p[1:(m*(m-1))])
        y$pm$rate <- exp(p[(m^2-m+1):(m^2)])
        y$delta <- compdelta(Pi)
        return(y)
    }

    p <- c(Pi2vector(x$Pi), log(x$pm$rate))

    #    Increase iterlim below to achieve convergence
    #    Has been restricted to minimise time of package checks
    z <- nlm(neglogLik, p, object=x, pmap=allmap,
             print.level=2, gradtol=0.000001, iterlim=2)
    x2 <- allmap(x, z$estimate)

    #    compare parameter estimates
    print(summary(x))
    print(summary(x1))
    print(summary(x2))

    #--------------------------------------------------------
    #    Estimate only the off diagonal elements in the matrix Pi
    #    Hold all others as in the simulation
    #    The mapping of the changeable parameters into the dthmm
    #    object is done within the function assigned to pmap
    #    The logit-like transform removes boundaries

    x <- dthmm(NULL, Pi, delta, "exp", list(rate=c(5, 2, 0.2)))
    x <- simulate(x, nsim=5000, seed=5)

    offdiagmap <- function(y, p){
        #    rows must sum to one
        invlogit <- function(eta)
            exp(eta)/(1+exp(eta))
        y$Pi[1,2] <- (1-y$Pi[1,1])*invlogit(p[1])
        y$Pi[1,3] <- 1-y$Pi[1,1]-y$Pi[1,2]
        y$Pi[2,1] <- (1-y$Pi[2,2])*invlogit(p[2])
        y$Pi[2,3] <- 1-y$Pi[2,1]-y$Pi[2,2]
        y$Pi[3,1] <- (1-y$Pi[3,3])*invlogit(p[3])
        y$Pi[3,2] <- 1-y$Pi[3,1]-y$Pi[3,3]
        return(y)
    }

    z <- nlm(neglogLik, c(0, 0, 0), object=x, pmap=offdiagmap,
             print.level=2, gradtol=0.000001)

    #    x1 contains revised parameter estimates
    x1 <- offdiagmap(x, z$estimate)

    #    print revised values of Pi
    print(x1$Pi)

    #    print log-likelihood using original and revised values
    print(logLik(x))
    print(logLik(x1))

    #--------------------------------------------------------
    #    Fully estimate both Q and lambda for an MMPP Process

    Q <- matrix(c(-8,  5,  3,
                   1, -4,  3,
                   2,  5, -7),
                byrow=TRUE, nrow=3)/25
    lambda <- c(5, 10, 15)
    delta <- c(0, 1, 0)

    #    simulate some data
    x <- mmpp(NULL, Q, delta, lambda)
    x <- simulate(x, nsim=5000, seed=5)

    allmap <- function(y, p){
        #    maps vector back to Q and lambda
        m <- sqrt(length(p))
        y$Q <- vector2Q(p[1:(m*(m-1))])
        y$lambda <- exp(p[(m^2-m+1):(m^2)])
        return(y)
    }

    #    Start by using the EM algorithm
    x1 <- BaumWelch(x, control=bwcontrol(maxiter=10, tol=0.01))

    #    use above as initial values for the nlm function
    #    map parameters to a single vector, fixed delta
    p <- c(Q2vector(x1$Q), log(x1$lambda))

    #    Complete estimation using nlm
    #    Increase iterlim below to achieve convergence
    #    Has been restricted to minimise time of package checks
    z <- nlm(neglogLik, p, object=x, pmap=allmap,
             print.level=2, gradtol=0.000001, iterlim=5)

    #    mmpp object with estimated parameter values from nlm
    x2 <- allmap(x, z$estimate)

    #    compare log-likelihoods
    print(logLik(x))
    print(logLik(x1))
    print(logLik(x2))

    #    print final parameter estimates
    print(summary(x2))

probhmm                  Conditional Distribution Function of DTHMM

Description

Calculates the distribution function at each point for a dthmm process, given the complete observed process except the given point.

Usage

    probhmm(logalpha, logbeta, Pi, delta, cumprob)

Arguments

logalpha    an n × m matrix containing the logarithm of the forward probabilities.
logbeta     an n × m matrix containing the logarithm of the backward probabilities.
Pi          the m × m transition probability matrix of the hidden Markov chain.
delta       the marginal probability distribution of the m hidden states at the first time point.
cumprob     an n × m matrix where the (i, k)th element is Pr{Xi ≤ xi | Ck = ck}.

Details

Let X^(−i) denote the entire process, except with the point Xi removed. The distribution function at the point Xi is

    Pr{Xi ≤ xi | X^(−i) = x^(−i)}.

This R function calculates the distribution function for each point Xi for i = 1, ..., n. This is done by using the forward and backward probabilities before and after the ith point, respectively.

In the programming code, note the subtraction of the mean. This is to stop underflow when the exponential is taken. Removal of the mean is automatically compensated for by the fact that the same factor is removed in both the numerator and denominator.

Value

A vector containing the probability.

See Also

residuals

residuals                Residuals of Hidden Markov Model

Description

Provides methods for the generic function residuals. There is currently no method for objects of class "mmpp".

Usage

    ## S3 method for class 'dthmm'
    residuals(object, ...)
    ## S3 method for class 'mmglm0'
    residuals(object, ...)
    ## S3 method for class 'mmglm1'
    residuals(object, ...)
    ## S3 method for class 'mmglmlong1'
    residuals(object, ...)

Arguments

object      an object with class dthmm, mmglm0, mmglm1 or mmglmlong1.
...         other arguments.

Details

The calculated residuals are pseudo residuals. Under satisfactory conditions they have an approximate standard normal distribution.

Initially the function probhmm is called. If the model fits satisfactorily, the returned values should be approximately uniformly distributed. Hence, by applying the function qnorm, the resultant "residuals" should have an approximate standard normal distribution. A continuity adjustment is made when the observed distribution is discrete.

In the case of count distributions (e.g. binomial and Poisson) where the observed count is close to or on the boundary of the domain (e.g. binomial or Poisson count is zero, or binomial count is "n"), the pseudo residuals will give a very poor indication of the model's goodness of fit; see the Poisson example below.

The code for the methods "dthmm", "mmglm0", "mmglm1" or "mmglmlong1" can be viewed by typing residuals.dthmm, residuals.mmglm0, residuals.mmglm1 or residuals.mmglmlong1, respectively, on the R command line.

Value

A vector containing the pseudo residuals.

Examples

    #    Example Using Beta Distribution

    Pi <- matrix(c(0.8, 0.2,
                   0.3, 0.7),
                 byrow=TRUE, nrow=2)

    n <- 2000
    x <- dthmm(NULL, Pi, c(0, 1), "beta",
               list(shape1=c(2, 6), shape2=c(6, 2)))
    x <- simulate(x, nsim=n, seed=5)

    y <- residuals(x)

    w <- hist(y, main="Beta HMM: Pseudo Residuals")
    z <- seq(-3, 3, 0.01)
    points(z, dnorm(z)*n*(w$breaks[2]-w$breaks[1]), col="red", type="l")
    box()

    qqnorm(y, main="Beta HMM: Q-Q Plot of Pseudo Residuals")
    abline(a=0, b=1, lty=3)
    abline(h=seq(-2, 2, 1), lty=3)
    abline(v=seq(-2, 2, 1), lty=3)

    #----------------------------------------------
    #    Example Using Gaussian Distribution

    Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
                   1/3, 1/3, 1/3,   0,   0,
                     0, 1/3, 1/3, 1/3,   0,
                     0,   0, 1/3, 1/3, 1/3,
                     0,   0,   0, 1/2, 1/2),
                 byrow=TRUE, nrow=5)

    n <- 2000
    x <- dthmm(NULL, Pi, c(0, 1, 0, 0, 0), "norm",
               list(mean=c(1, 4, 2, 5, 3), sd=c(0.5, 1, 1, 0.5, 0.1)))
    x <- simulate(x, nsim=n, seed=5)

    y <- residuals(x)

    w <- hist(y, main="Gaussian HMM: Pseudo Residuals")
    z <- seq(-3, 3, 0.01)
    points(z, dnorm(z)*n*(w$breaks[2]-w$breaks[1]), col="red", type="l")
    box()

    qqnorm(y, main="Gaussian HMM: Q-Q Plot of Pseudo Residuals")
    abline(a=0, b=1, lty=3)
    abline(h=seq(-2, 2, 1), lty=3)
    abline(v=seq(-2, 2, 1), lty=3)

    #----------------------------------------------
    #    Example Using Poisson Distribution

    Pi <- matrix(c(0.8, 0.2,
                   0.3, 0.7),
                 byrow=TRUE, nrow=2)

    n <- 2000
    x <- dthmm(NULL, Pi, c(0, 1), "pois",
               list(lambda=c(1, 5)), discrete=TRUE)
    x <- simulate(x, nsim=n, seed=5)

    y <- residuals(x)

    w <- hist(y, main="Poisson HMM: Pseudo Residuals")
    z <- seq(-3, 3, 0.01)
    points(z, dnorm(z)*n*(w$breaks[2]-w$breaks[1]), col="red", type="l")
    box()

    qqnorm(y, main="Poisson HMM: Q-Q Plot of Pseudo Residuals")
    abline(a=0, b=1, lty=3)
    abline(h=seq(-2, 2, 1), lty=3)
    abline(v=seq(-2, 2, 1), lty=3)

simulate                 Simulate Hidden Markov Process

Description

These functions provide methods for the generic function simulate.

Usage

    ## S3 method for class 'dthmm'
    simulate(object, nsim = 1, seed = NULL, ...)
    ## S3 method for class 'mchain'
    simulate(object, nsim = 1, seed = NULL, ...)
    ## S3 method for class 'mmglm0'
    simulate(object, nsim = 1, seed = NULL, ...)
    ## S3 method for class 'mmglm1'
    simulate(object, nsim = 1, seed = NULL, ...)
    ## S3 method for class 'mmglmlong1'
    simulate(object, nsim = 1, seed = NULL, ...)
    ## S3 method for class 'mmpp'
    simulate(object, nsim = 1, seed = NULL, ...)

Arguments

object      an object with class "dthmm", "mchain", "mmglm0", "mmglm1", "mmglmlong1" or "mmpp".
nsim        number of points to simulate.
seed        seed for the random number generator.
...         other arguments.

Details

Below, details about particular methods are given where necessary.

simulate.mmglm0: If the covariate x1 is NULL, then uniform (0, 1) variables are generated as the values for x1. When the family is "binomial" and size is NULL (i.e. the number of Bernoulli trials is not specified), then they are simulated as 100+rpois(nsim, lambda=5).

The code for the methods "dthmm", "mchain", "mmglm0", "mmglm1", "mmglmlong1" and "mmpp" can be viewed by typing simulate.dthmm, simulate.mchain, simulate.mmglm0, simulate.mmglm1, simulate.mmglmlong1 or simulate.mmpp, respectively, on the R command line.

Value

The returned object has the same class as the input object and contains the components that were in the input object. Where object is of class "dthmm" it will also have a vector x containing the simulated values, and when the class is "mmglm0" x will be a dataframe. When the class is "mmpp" the times of the simulated Poisson events are contained in tau. Other variables are also added, like the sequence of Markov states, and the time spent in each state ("mmpp").

Examples

    #    The hidden Markov chain has 5 states with transition matrix:

    Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
                   1/3, 1/3, 1/3,   0,   0,
                     0, 1/3, 1/3, 1/3,   0,
                     0,   0, 1/3, 1/3, 1/3,
                     0,   0,   0, 1/2, 1/2),
                 byrow=TRUE, nrow=5)

    #--------------------------------------------
    #    simulate a Poisson HMM

    x <- dthmm(NULL, Pi, c(0, 1, 0, 0, 0), "pois",
               list(lambda=c(1, 4, 2, 5, 3)), discrete=TRUE)
    x <- simulate(x, nsim=2000)

    #    check Poisson means
    for (i in 1:5) print(mean(x$x[x$y==i]))

    #--------------------------------------------
    #    simulate a Gaussian HMM

    x <- dthmm(NULL, Pi, c(0, 1, 0, 0, 0), "norm",
               list(mean=c(1, 4, 2, 5, 3), sd=c(0.5, 1, 1, 0.5, 0.1)))
    x <- simulate(x, nsim=2000)

    #    check means and standard deviations
    for (i in 1:5) print(mean(x$x[x$y==i]))
    for (i in 1:5) print(sd(x$x[x$y==i]))

summary                  Summary of Hidden Markov Model

Description

Provides methods for the generic function summary.

Usage

    ## S3 method for class 'dthmm'
    summary(object, ...)
    ## S3 method for class 'mmglm0'
    summary(object, ...)
    ## S3 method for class 'mmglm1'
    summary(object, ...)
    ## S3 method for class 'mmglmlong1'
    summary(object, ...)
    ## S3 method for class 'mmpp'
    summary(object, ...)

Arguments

object      an object with class "dthmm", "mmglm0", "mmglm1", "mmglmlong1" or "mmpp".
...         other arguments.

Details

The code for the methods "dthmm", "mmglm0", "mmglm1", "mmglmlong1" and "mmpp" can be viewed by typing summary.dthmm, summary.mmglm0, summary.mmglm1, summary.mmglmlong1 or summary.mmpp, respectively, on the R command line.

Value

A list object with a reduced number of components, mainly the parameter values.

Examples

    Pi <- matrix(c(0.8, 0.2,
                   0.3, 0.7),
                 byrow=TRUE, nrow=2)
    x <- dthmm(NULL, Pi, c(0, 1), "beta",
               list(shape1=c(2, 6), shape2=c(6, 2)))
    x <- simulate(x, nsim=2000)
    print(summary(x))

Transform-Parameters     Transform Transition or Rate Matrices to Vector

Description

These functions transform m × m transition probability matrices or Q matrices to a vector of length m(m − 1), and back.

Usage

    Pi2vector(Pi)
    vector2Pi(p)
    Q2vector(Q)
    vector2Q(p)

Arguments

Pi          an m × m transition probability matrix.
Q           an m × m rate matrix.
p           a vector of length m(m − 1). See Details.

Details

The function Pi2vector maps the m × m transition probability matrix of a discrete time HMM to a vector of length m(m − 1), and vector2Pi has the reverse effect. They use a logit like transformation so that the parameter space is on the whole real line, thus avoiding hard boundaries which cause problems for many optimisation procedures (see neglogLik).

Similarly, the function Q2vector maps the m × m rate matrix Q of an MMPP process to a vector of length m(m − 1), and vector2Q has the reverse effect. They use a log transformation so that the parameter space is on the whole real line, again avoiding hard boundaries which cause problems for many optimisation procedures (see neglogLik).

Value

The functions Pi2vector and Q2vector return a vector of length m(m − 1), the function vector2Pi returns an m × m transition probability matrix, and vector2Q returns an m × m rate matrix Q.

See Also

neglogLik

Examples

    Pi <- matrix(c(0.8, 0.1, 0.1,
                   0.1, 0.6, 0.3,
                   0.2, 0.3, 0.5),
                 byrow=TRUE, nrow=3)

    print(vector2Pi(Pi2vector(Pi)))

    #------------------------------------------------

    Q <- matrix(c(-8,  5,  3,
                   1, -4,  3,
                   2,  5, -7),
                byrow=TRUE, nrow=3)

    print(vector2Q(Q2vector(Q)))

Viterbi                  Viterbi Algorithm for Hidden Markov Model

Description

Provides methods for the generic function Viterbi. This predicts the most likely sequence of Markov states given the observed dataset. There is currently no method for objects of class "mmpp".

Usage

    ## S3 method for class 'dthmm'
    Viterbi(object, ...)
    ## S3 method for class 'mmglm0'
    Viterbi(object, ...)
    ## S3 method for class 'mmglm1'
    Viterbi(object, ...)
    ## S3 method for class 'mmglmlong1'
    Viterbi(object, ...)

Arguments

object      an object with class "dthmm", "mmglm0", "mmglm1" or "mmglmlong1".
...         other arguments.

Details

The purpose of the Viterbi algorithm is to globally decode the underlying hidden Markov state at each time point. It does this by determining the sequence of states (k1*, ..., kn*) which maximises the joint distribution of the hidden states given the entire observed process, i.e.

    (k1*, ..., kn*) = argmax over k1, ..., kn in {1, ..., m} of
                      Pr{C1 = k1, ..., Cn = kn | X(n) = x(n)}.

The algorithm has been taken from Zucchini (2005); however, we calculate sums of the logarithms of probabilities rather than products of probabilities. This lessens the chance of numerical underflow. Given that the logarithmic function is monotonically increasing, the argmax will still be the same. Note that argmax can be evaluated with the R function which.max.

Determining the a posteriori most probable state at time i is referred to as local decoding, i.e.

    ki* = argmax over k in {1, ..., m} of Pr{Ci = k | X(n) = x(n)}.

Note that these probabilities are calculated by the function Estep and are contained in u[i, j] (output from Estep), i.e. ki* is simply which.max(u[i, ]).

The code for the methods "dthmm", "mmglm0", "mmglm1" and "mmglmlong1" can be viewed by typing Viterbi.dthmm, Viterbi.mmglm0, Viterbi.mmglm1 or Viterbi.mmglmlong1, respectively, on the R command line.

Value

A vector of length n containing integers (1, ..., m) representing the hidden Markov states for each node of the chain.

References

Cited references are listed on the HiddenMarkov manual page.

Examples

    Pi <- matrix(c(1/2, 1/2,   0,   0,   0,
                   1/3, 1/3, 1/3,   0,   0,
                     0, 1/3, 1/3, 1/3,   0,
                     0,   0, 1/3, 1/3, 1/3,
                     0,   0,   0, 1/2, 1/2),
                 byrow=TRUE, nrow=5)

    delta <- c(0, 1, 0, 0, 0)
    lambda <- c(1, 4, 2, 5, 3)
    m <- nrow(Pi)

    x <- dthmm(NULL, Pi, delta, "pois", list(lambda=lambda),
               discrete=TRUE)
    x <- simulate(x, nsim=2000)

    #------  Global Decoding  ------

    states <- Viterbi(x)
    states <- factor(states, levels=1:m)

    #    Compare predicted states with true states
    #    p[j,k] = Pr{Viterbi predicts state k | true state is j}
    p <- matrix(NA, nrow=m, ncol=m)
    for (j in 1:m){
        a <- (x$y==j)
        p[j,] <- table(states[a])/sum(a)
    }
    print(p)

    #------  Local Decoding  ------

    #    locally decode at i=100
    print(which.max(Estep(x$x, Pi, delta, "pois",
                          list(lambda=lambda))$u[100,]))

    #---------------------------------------------------
    #    simulate a beta HMM

    Pi <- matrix(c(0.8, 0.2,
                   0.3, 0.7),
                 byrow=TRUE, nrow=2)
    delta <- c(0, 1)

    y <- seq(0.01, 0.99, 0.01)
    plot(y, dbeta(y, 2, 6), type="l", ylab="Density", col="blue")
    points(y, dbeta(y, 6, 2), type="l", col="red")

    n <- 100
    x <- dthmm(NULL, Pi, delta, "beta",
               list(shape1=c(2, 6), shape2=c(6, 2)))
    x <- simulate(x, nsim=n)

    #    colour denotes actual hidden Markov state
    plot(1:n, x$x, type="l", xlab="Time", ylab="Observed Process")
    points((1:n)[x$y==1], x$x[x$y==1], col="blue", pch=15)
    points((1:n)[x$y==2], x$x[x$y==2], col="red", pch=15)

    states <- Viterbi(x)

    #    mark the wrongly predicted states
    wrong <- (states != x$y)
    points((1:n)[wrong], x$x[wrong], pch=1, cex=2.5, lwd=2)
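The log-space recursion described in the Details section above can be written down in a few lines. This is a generic standalone sketch (the function viterbi_decode is hypothetical, not the package's Viterbi method):

```r
# Sketch: generic Viterbi decoding in log space.
# logdens[i, k] = log Pr{X_i = x_i | C_i = k}

viterbi_decode <- function(Pi, delta, logdens){
    n <- nrow(logdens); m <- ncol(logdens)
    logPi <- log(Pi)
    nu <- matrix(-Inf, n, m)      # nu[i, k]: best log prob ending in k
    back <- matrix(0L, n, m)      # backpointers for path recovery
    nu[1, ] <- log(delta) + logdens[1, ]
    for (i in 2:n){
        for (k in 1:m){
            cand <- nu[i-1, ] + logPi[, k]
            back[i, k] <- which.max(cand)
            nu[i, k] <- max(cand) + logdens[i, k]
        }
    }
    states <- integer(n)
    states[n] <- which.max(nu[n, ])
    for (i in (n-1):1) states[i] <- back[i+1, states[i+1]]
    states
}

#    two-state Poisson HMM with well separated rates
Pi <- matrix(c(0.9, 0.1,
               0.1, 0.9), byrow=TRUE, nrow=2)
lambda <- c(1, 10)
x <- c(0, 1, 2, 9, 12, 10, 1, 0)
logdens <- outer(x, lambda, dpois, log=TRUE)
print(viterbi_decode(Pi, c(0.5, 0.5), logdens))   # 1 1 1 2 2 2 1 1
```

Working with sums of log probabilities, exactly as in the package method, keeps the recursion free of the underflow that products of many small probabilities would cause.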