This post is an incomplete, speculative synthesis of several previous posts.
The question is whether you can have any expectation of a future model equilibrium besides the model-consistent (i.e. rational) expectation of a maximum entropy model. Or, put another way: Is the information loss required for the time translation (inverse Koopman) operator Et the same as the information loss incurred in reaching the maximum entropy state?
This is true for a normal distribution: the information in the initial distribution beyond the mean and variance is exactly the information lost under the (inverse) Et operator.
Consider the KL divergence between an arbitrary distribution Q with a given mean and variance and a normal distribution N(μ, σ): DKL(N||Q) ≡ ΔI. Propagating Q into the future (via the inverse of Et) leads to a normal distribution (by the central limit theorem), resulting in an information loss of ΔI. Additionally, the normal distribution is the maximum entropy distribution constrained to have a given mean and variance, so the approach to maximum entropy (the disappearance of the information in the initial condition Q) also yields a normal distribution, and therefore the same information loss DKL(N||Q) ≡ ΔI.
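As a quick numerical illustration (not from the original post), here's a Python sketch assuming Q is a shifted exponential distribution with zero mean and unit variance. Summing n iid draws and standardizing plays the role of the time translation; a histogram estimate of the KL divergence against N(0,1) shrinks toward zero as n grows, i.e. the initial ΔI of information beyond the mean and variance is lost:

```python
import numpy as np

rng = np.random.default_rng(0)

def kl_vs_normal(samples, bins=80):
    """Histogram estimate of D_KL(P || N(0,1)) for standardized samples."""
    p, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    width = edges[1] - edges[0]
    q = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)  # N(0,1) pdf
    mask = p > 0                                       # skip empty bins
    return np.sum(p[mask] * np.log(p[mask] / q[mask])) * width

def q_samples(size):
    """Q: exponential shifted to zero mean (and unit variance)."""
    return rng.exponential(1.0, size) - 1.0

# "Propagate" Q by summing n iid draws and standardizing (the CLT step);
# the estimated divergence from the normal distribution falls with n.
for n in [1, 4, 16, 64]:
    s = q_samples((200_000, n)).sum(axis=1) / np.sqrt(n)
    print(n, round(kl_vs_normal(s), 4))
```

The n = 1 row is (an estimate of) the initial ΔI; the later rows show that information about the specific shape of Q disappearing under propagation.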
Does this always work out? We'd like to ask: Is every universal distribution (with constraint C) also a maximum entropy distribution (with constraint C)?
One issue is that the uniform distribution is the maximum entropy distribution for a bounded random variable, but it is not universal in the sense of the central limit theorem (samples from a universal distribution that propagate in time via Et become a normal distribution). However, any distribution can be related to a uniform distribution (via the probability integral transform), so maybe this issue isn't as problematic as it first appears. I'll see if I can work out a proof**.
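The probability integral transform mentioned above can be sketched in a few lines of Python (an illustration I've added, using an exponential variable as the example): pushing samples of X through its own CDF F yields a Uniform(0,1) variable, which is the sense in which any distribution can be related to the uniform one.

```python
import numpy as np

rng = np.random.default_rng(1)

# X ~ Exp(1), with CDF F(x) = 1 - exp(-x)
x = rng.exponential(1.0, 100_000)

# Probability integral transform: U = F(X) ~ Uniform(0, 1)
u = 1.0 - np.exp(-x)

print(round(u.mean(), 3))  # close to the uniform mean 1/2
print(round(u.var(), 3))   # close to the uniform variance 1/12 ≈ 0.083
```

The inverse transform (X = F⁻¹(U)) runs the other way, so a constraint stated for a uniform distribution can in principle be restated for the transformed variable.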
For right now, this post is just capturing my half-baked thinking. Here is my intuition in terms of economics. The key point is that agents must expect the information loss (in any model) because otherwise the operator Et is ill-defined (the inverse of a non-invertible operator). Additionally, information loss (about the initial conditions) is exactly what happens as a system drifts toward its entropy maximum. Purchasing goods and services (i.e. the transactions that propagate an economy into the future) consumes the information in the prices (one way to think about the efficient markets hypothesis), therefore the economic model must be a maximum entropy model in order for the model to contain Et operators (expectations) and propagate the system toward the future equilibrium given initial conditions (losing information both by maximizing entropy and via the Et operator).
This will fail if there is a failure of information equilibrium (non-ideal information transfer), because then you're not in equilibrium (and therefore not moving toward a maximum entropy state), which breaks the connection between the information loss in the time translation and the entropy maximizing process.
This may seem like a word salad, but I swear it contains some useful intuition.
** I have a nagging feeling that what you end up with is something where the information loss in the entropy maximizing process is proportional to the information loss in the expectations/time translation (into the future) process -- resulting in a condition that is precisely an information equilibrium condition specific to the distributions involved (for uniform distributions, you end up with the basic information equilibrium equation of Fielitz and Borchardt), with the information transfer index relating to the proportionality. I'd probably have to treat the propagation into the future as an information equilibrium relationship between the future and the present.
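For reference (my addition, not part of the original footnote), the basic information equilibrium condition referred to above relates two quantities A and B via the information transfer index k, with a power-law solution:

```latex
\frac{dA}{dB} = k\,\frac{A}{B}
\quad\Rightarrow\quad
A = A_{0}\left(\frac{B}{B_{0}}\right)^{k}
```

In the speculation above, A and B would stand for the future and present states of the distribution, with k capturing the proportionality between the two information losses.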