Abstract:
We propose predictive models similar in spirit to variable memory length
Markov models (VLMMs). The models are constructed by first
transforming the n-block structure of the training sequence into a
spatial structure of points in a unit hypercube, such that the
longer the common suffix shared by any two n-blocks, the closer
their point representations lie. Such a transformation embodies a
Markov assumption: n-blocks with long common suffixes are likely
to produce similar continuations. Finding a set of prediction
contexts is formulated as a resource allocation problem solved by
vector quantizing the spatial n-block representation. We compare
our model with both the classical and variable memory length Markov
models on three data sets with different memory and stochastic
components. Our models achieve superior performance while their
construction remains fully automatic, something that is shown to be
problematic in the case of VLMMs.
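A minimal sketch of the idea described above, not the paper's exact construction: each symbol is assigned to a corner of the unit hypercube, and an n-block is mapped to a point by repeatedly contracting toward the corner of each successive symbol, so the most recent symbols (the suffix) dominate the final position and n-blocks with long common suffixes land close together. A standard vector quantizer (here k-means) then carves the point cloud into prediction contexts. The contraction ratio k = 0.5, the corner assignment, the toy sequence, and the use of scikit-learn's KMeans are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

def nblock_points(sequence, n, corners, k=0.5):
    """Map every n-block of `sequence` to a point in the unit hypercube.

    Symbols are processed oldest to newest; each step contracts the
    current point toward the corner assigned to that symbol, so the
    suffix of the n-block dominates the resulting position.
    """
    dim = next(iter(corners.values())).shape[0]
    points = []
    for i in range(len(sequence) - n + 1):
        x = np.full(dim, 0.5)               # start at the hypercube centre
        for s in sequence[i:i + n]:         # oldest symbol first
            x = k * x + (1.0 - k) * corners[s]
        points.append(x)
    return np.array(points)

# Binary alphabet mapped to the endpoints of the unit interval [0, 1]
corners = {'0': np.array([0.0]), '1': np.array([1.0])}
seq = '0110100110010110'                    # toy training sequence
pts = nblock_points(seq, n=4, corners=corners)

# Prediction contexts found by vector quantizing the point representation
quantizer = KMeans(n_clusters=4, n_init=10).fit(pts)
print(quantizer.cluster_centers_)
```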