**Mathematical Foundations of Large Eddy Simulation**
William Layton
Department of Mathematics
University of Pittsburgh
ABSTRACT
This document gives a more detailed survey of our group's ideas and efforts studying large eddy simulation from a mathematical viewpoint.
**1. Introduction.**
Is it possible to predict the motion of larger
eddies in fluid flow
problems without complete resolution of all persistent eddies?
Is it truly
possible to separate modeling errors from numerical errors in
flow
simulations? How is the error in a flow simulation distributed
among
eddies of different sizes? Are the details of the smaller eddies
needed to
predict the motion of the larger eddies or are some means or
moments
sufficient?
The research program begun by our group at Pitt addresses these questions mathematically; in other words, it aims to develop the numerical analysis of large eddy simulation.
For the foreseeable future, turbulent flow can hardly be simulated reliably via *direct numerical simulation* (**DNS**), because direct numerical simulation requires resolution of all persistent eddies.
Assuming Kolmogorov's description of turbulence is valid, the smallest length scale is expected to be O(Re^(-3/4)). Thus, O(Re^(9/4)) grid points are required for **DNS**, with a correspondingly large computational effort.
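The scaling above can be made concrete with a short back-of-envelope calculation; the Reynolds numbers below are illustrative examples, not values from the text.

```python
# Back-of-envelope DNS cost from the Kolmogorov scaling above.
# The Reynolds numbers are illustrative examples, not values from the text.

def dns_grid_points(Re):
    """Grid points needed for 3-d DNS: O(Re^(9/4))."""
    return Re ** 2.25

for Re in (1.0e4, 1.0e6):
    print(f"Re = {Re:.0e}: about {dns_grid_points(Re):.1e} grid points")
```

Already at Re = 10^4 this calls for on the order of 10^9 grid points, which is why **DNS** of turbulent flow is out of reach for the foreseeable future.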
Conventional turbulence models, such as the k-epsilon model, also cannot be the answer, since they are based upon loose analogies, guesswork and the fitting of a host of parameters to a particular flow. Because of the extensive 'calibration' needed for conventional turbulence models to attain even minimal levels of accuracy, they can never attain the degree of universality that would accompany an accurate description of the physics of turbulence.
One approach to modeling turbulent fluctuations
is the analogy between
the mixing effects of turbulent fluctuations and molecular
diffusion
put
forward by Boussinesq in 1877. For example, if u^ denotes either a time or ensemble average and u' the fluctuation about u^, so that u = u^ + u', closure in all turbulence models is based on this analogy:

div[ (u'u')^ ] ~ div{ mu_t ( grad(u^) + grad(u^)^tr ) },

where

mu_t = mu_t( u^, k, epsilon, ... )

is the turbulent viscosity
coefficient. Because it is based upon this analogy, the effectiveness of turbulence modeling depends ultimately upon the calibration, tuning and data fitting of the problem-dependent parameters which occur in mu_t.
This
relation has a certain amount of mathematical support (summarized
well in
the book of Mohammadi and Pironneau). Thus, it seems likely that
most
conventional turbulence models go wrong mainly in how they
compute
approximations to the inputs like k (the kinetic energy in the
turbulent
fluctuations) and in how boundary conditions are selected
for the
appended partial differential equations.
__Large eddy simulation__ (**LES**) is
often described as being based on a
direct approximation of the large vortices, or eddies. The effects
of the
small eddies upon the large eddies are modeled by a systematic
closure
approximation. (We postpone for a moment discussion of so-called subgrid-scale models.) While large eddy simulations are typically
3-d and
evolutionary, hence costly, their advantages include:
*1. Storage and computational costs are independent of the
Reynolds
number.*
One of the goals of **LES** is to approximate turbulent flow with complexity depending only on the resolution sought (the large eddy scale and not that of the small fluctuations).
*2. Consistency:*
closure is based upon approximation rather than
analogy so consistency is automatic.
Our research effort is concerned with two complementary approaches to large eddy simulation, together with several related topics:
**(Section 2)** Numerical analysis of continuum level approximations of large eddy motion,
**(Section 3)** Direct simulation of large eddy motion,
**(Section 4)** Analysis of scale-similarity models,
**(Section 5)** Reliability estimation via parallel ensemble calculation,
**(Section 6)** Better subgrid-scale modelling.
The first, numerical analysis of continuum models of large eddy
motion, is
similar in spirit to traditional **LES** in that the Navier
Stokes equations
are filtered, closure is addressed, a continuum model is derived
and then
solved approximately. We are developing two improvements in this approach:
(Section 2.1) __Mathematically sound continuum models of large eddies,__
(Section 2.2) __Accurate boundary conditions for LES.__
First, while the ideas behind the derivation of common models in LES are well grounded physically, their mathematical execution is (in our view) fundamentally flawed. Since these models are approximations, the filtered true solution of the Navier Stokes equations, inserted into the model, has a small residual in some sense. This does not imply,
of course,
that the solution of the model is close to the filtered true
solution
of
the Navier Stokes equations. To our knowledge, it has never been
proven
that solutions of any **LES** model or turbulence model are
close to the true
desired flow averages. Analytical components of the project are
attempting
to give a full mathematical description of the modelling error
for a new
family of **LES** models. In section 2, our approach is
presented.
The difference between the true solution of
the eddy viscosity model
and the filtered solution of the Navier Stokes equations satisfies
an
equation driven by the aforementioned residual. Thus the
conclusion
holds
only when the eddy viscosity model is stable in the right sense.
While
common LES models do not satisfy this important condition, we
proposed
a
closure approximation in
[GL98] G.P. Galdi and W. Layton, __Approximating the larger eddies in fluid motion II: a model for space filtered flow__, to appear in: Math. Methods and Models in the Appl. Sciences, 1998,
which leads to a new family of LES models which appear to have the stability property needed to conclude that their solutions closely approximate the large eddies in the flow field. The next step in our research involves completing the proof that the solutions of the model closely approximate the true large eddies and then developing the model further and testing it in computational experiments.
Problems in the boundary conditions for LES arise because there is a basic difficulty in the filtering process near boundaries. Commonly, either 'no-slip' boundary conditions or an analogous wall law are imposed. However, it is easy to see in the next figure that, while no-penetration conditions are correct for the large eddies, Dirichlet boundary conditions are not.
Fluid Region
| ------->
| ------>
| ----->
| ---->
| --->
| -->
| ->
**_______________*___________ tau ->**
**///////// Solid Wall ////////**
Imagine a ball of positive radius centered at the point *. Average the flow field over that ball, with u extended by zero outside the flow region. Clearly, u^.n = 0 but u^.tau is non-zero!
Figure 1.1. w^.tau = 0 on the boundary is not the correct condition, even for laminar boundary layers!
Researchers have often reported that LES
simulations
have difficulties
predicting flow fields near boundaries accurately. These large
errors near
boundaries are, in our view, introduced in traditional approaches
to **LES**
due to the imposition of incorrect boundary conditions upon the
eddy
viscosity models. This fact is increasingly being recognized in
**LES**. For
example, there is an increasing quantity of work studying models where
delta = delta(x), where delta(x) -> 0 as x -> walls,
for example,
delta(x) = min{ delta_0, dist(x, walls) }.
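As a minimal sketch of this variable averaging radius (the channel geometry and the value of delta_0 are hypothetical choices, not from the text):

```python
# Variable averaging radius delta(x) = min{ delta_0, dist(x, walls) },
# sketched for a hypothetical channel with walls at y = 0 and y = 1.

def delta(y, delta0=0.1):
    """Averaging radius that shrinks to zero at the walls."""
    return min(delta0, y, 1.0 - y)

# delta -> 0 at the walls, so no-slip can be imposed on u^;
# the price is that every boundary layer must then be fully resolved.
print(delta(0.5), delta(0.02), delta(0.0))
```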
This causes great practical difficulties even in the formal
modelling
steps but it at least allows one to apply no-slip boundary
conditions
on
u^. However, it also implies that all boundary layers must be fully resolved, obviating many of the practical benefits of **LES**! Furthermore, there are serious mathematical reasons to keep delta constant; some are detailed in section 2 below. Still, if there were no other possibility for more consistent boundary treatment, this program would have to be followed. Fortunately, there are at least two other approaches which are simpler and can be pursued in parallel to the work on variable delta's. We describe these other two approaches herein.
The first of these new procedures for correcting the boundary conditions __for constant delta (!)__ imposed in any eddy viscosity model was given in [GL98]. This boundary treatment has been fully developed by a Ph.D. student, Mr. Niyazi Sahin. This work, including laminar layers, turbulent power-law layers and log-law layers, is presented in his report:
[S99] N. Sahin, __Boundary conditions for large eddy simulation__, tech. report, 1999.
Even the simplest of the new eddy viscosity models leads to interesting new algorithmic challenges. The last aspect of this approach to LES to be studied will be algorithm design and validation for eddy viscosity models.
While the
proposed
research on continuum models of large eddies
is inspired by 28 years of development of large eddy simulation
by the
engineering community, the second, direct simulation of large eddy
motion,
is (to the best of our knowledge) a completely new approach to
large eddy
simulation. This approach is straightforward and mathematically honest: the Navier Stokes equations are discretized by a "good" method (such as streamline diffusion methods or stabilized defect correction methods, e.g., [ELM98]). The approximate solution (uh, ph) is then post-processed:
Approximate NSE: ( u, p ) |----> ( uh, ph ),
Post-process: ( uh, ph ) |----> ( g*uh, g*ph ), which approximates ( u^, p^ ).
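The two-step procedure can be sketched in a 1-d periodic analogue; the solver stand-in, the Fourier-based Gaussian filter and all parameters below are illustrative assumptions, not the actual finite element method:

```python
import numpy as np

# Sketch of the two-step pipeline: (1) approximate the unfiltered
# equations, (2) post-process by convolution with the Gaussian g.
# solve_nse_approx is a stand-in returning a sample velocity field.

def solve_nse_approx(n=256):
    x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    uh = np.sin(x) + 0.05 * np.sin(40.0 * x)  # large eddy + fine scales
    return x, uh

def gaussian_filter_periodic(u, x, delta):
    """Post-process: uh^ = g * uh via the Fourier symbol of the Gaussian."""
    n = u.size
    k = np.fft.fftfreq(n, d=x[1] - x[0])       # frequencies (cycles/unit)
    ghat = np.exp(-(2.0 * np.pi * k * delta) ** 2 / 2.0)
    return np.real(np.fft.ifft(ghat * np.fft.fft(u)))

x, uh = solve_nse_approx()
uh_bar = gaussian_filter_periodic(uh, x, delta=0.2)  # approximates u^
```

The O(delta) filter leaves the sin(x) eddy essentially intact while suppressing the fine-scale sin(40x) component.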
This approach requires no closure approximation and no elaborate
boundary
treatment. However, for it to be computationally feasible there
are at
least two requirements:
1. *The error in the large eddies should be much smaller than
the error in*
*the flow field. *(Otherwise it will require the storage and
computational
effort of direct numerical simulation.)
Surprisingly, in first steps, we have shown in
[JL98] V. John and W. Layton, __Approximating the larger eddies in fluid motion I: direct simulation for the Stokes problem__, submitted to SIAM JNA, 1998,
that this approximation to the large eddies can possess hidden accuracy without full resolution of the details of the small eddies.
2. *A mesh refinement strategy designed to guarantee the
accuracy
of the*
*approximation to the large eddies*.
In [JL98] we also began this task: two mathematically correct a posteriori error estimators are developed for estimating the error in the large eddies. With either, it is possible to refine to attain a prespecified guaranteed accuracy in the flow field's large eddies. The meshes so designed are locally refined yet much closer to uniform meshes than when an estimator for the energy norm error is used. Further, the estimated errors satisfy:

Estimator( || uh^ - u^ || ) << Estimator( || uh - u || ).
These a posteriori error estimators were proven correct for the
Stokes
problem in [JL98]. Mathematical structures for their extension to the fully nonlinear Navier Stokes equations seem to be in place already in the
P.I.'s work. In this proposed research, the estimators will be
further
developed for the 3-d evolutionary Navier Stokes
equations.
Computational experiments, tests and their
benchmarking
will be
performed in conjunction with a joint effort between our group
at Pitt and
Professor Lutz Tobiska's research group in Magdeburg, Germany.
This joint
effort is to develop a 3-d (object oriented) adaptive, parallel
C++ code
for the Navier Stokes equations, including modules handling free
surfaces,
turbulence and LES. Other portions of the theoretical and
computational
research effort will be performed in collaboration with several
strong
Ph.D. students in our group.
**2. Continuum Models of Large Eddy Motion.**
The usual approach to LES is to derive a continuum model for the
motion of
large eddies. This introduces "modeling errors". When the
continuum
model
is discretized and solved, additional "numerical errors" are
introduced.
Our research in [GL98] has focused on first deriving continuum models of large eddies which better capture the evolution of local flow averages.
Second, mathematical support for the continuum model has been
started
in
our report:
[GIL99] G.P. Galdi, T. Iliescu and W. Layton, report in
preparation.
Third, numerical analysis of algorithms for the new eddy viscosity
model
is currently under study, as are experiments comparing its
predictions
to
both conventional models and benchmarks.
First steps in deriving improved models for large eddy motion have already been taken in [GL98]. Recall that the most common definition of
of
large eddies in LES is the local flow average with a scaled
Gaussian:
u^(x, t) = g * u(x, t),
where
g(x) = (1/delta^d) g_0(x/delta),
g_0(x) := a Gaussian distribution.
Here g*u denotes the usual convolution of g with u. Before proceeding, it is worthwhile to note the most important properties of g*u = u^:
__1. If delta is CONSTANT then differentiation commutes with averaging. For any derivative d/dx:__
g*(du/dx) = d/dx (g*u).
This is an important property for deriving models.
__2. u^ is infinitely differentiable, provided delta is
CONSTANT.__
__3. The kinetic energy in u^ is bounded by that of u, if delta
is CONSTANT:__
KE( u^(t) ) <= KE( u(t) ) <= C( f, u(0), Re, t ).
__4. The smallest scale occurring in u^ is O( min{ delta(x) : x in the flow region } ).__
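Properties 1 and 3 can be checked numerically in a 1-d periodic analogue (an illustrative sketch using a Fourier-based Gaussian filter; the sample field is arbitrary):

```python
import numpy as np

# Numerical check of properties 1 and 3 for a constant-delta Gaussian
# filter, in a 1-d periodic setting with an arbitrary smooth field u.
n, delta = 512, 0.15
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(3.0 * x) + 0.3 * np.cos(7.0 * x)

k = np.fft.fftfreq(n, d=x[1] - x[0])
ghat = np.exp(-(2.0 * np.pi * k * delta) ** 2 / 2.0)  # Gaussian symbol

def filt(v):
    return np.real(np.fft.ifft(ghat * np.fft.fft(v)))

def ddx(v):
    return np.real(np.fft.ifft(2.0j * np.pi * k * np.fft.fft(v)))

# Property 1: g*(du/dx) = d/dx (g*u), so the commutator vanishes
commutator = np.max(np.abs(filt(ddx(u)) - ddx(filt(u))))

# Property 3: KE(u^) <= KE(u)
ke, ke_bar = 0.5 * np.mean(u ** 2), 0.5 * np.mean(filt(u) ** 2)
```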
Suppose the fluid motion is decomposed, as usual (1), into
means and
fluctuations: u = u^ + u'. Then, the quadratic term in the space
filtered
Navier Stokes equations leads to at least three terms requiring
closure:
(uu)^ = (u^u^)^ + (u^u') ^ + (u'u^) ^ + (u'u') ^.
In common LES models, e.g., the right hand side of the above is approximated, using Taylor series in the averaging radius delta, by terms involving the flow means only, and the following system results for w, which denotes the approximation to u^.
__The Most Usual Model__:
div w = 0,
dw/dt + div( ww ) + grad q - (1/Re) div grad w - div K = g*f,
where
K_ij = delta^2 w_i,x_m w_j,x_m
(abbreviated as delta^2 grad w grad w).
_______________________________________________________
(1) It is interesting to note that this decomposition, intensively exploited by Reynolds, was suggested by Leonardo da Vinci in the 1500's in his descriptions of the eddies behind a blunt body.
________________________________________________________
The true flow average u^ = g * u not only has eddies smaller than
O(delta)
suppressed by the averaging process but high frequencies are
suppressed
as
well: u^ is infinitely differentiable. The commonly used closure approximations based upon Taylor expansions have the undesirable effect of actually increasing the high frequencies in the approximation w.
We have studied closure approximation based upon rational
approximations
rather than truncated Taylor series. The new approximation, when developed into a continuum model, is of the same formal accuracy in the large eddies as the usual one. Further, it attenuates high frequencies as required.
The simplest case of the new modeling procedure yields the
initial,
boundary value problem:
__A Regularized Model:__
div w = 0,
dw/dt + div( ww ) + grad q - (1/Re) div grad w - div H = g*f,
- delta^2 div grad( H_ij ) + H_ij = K_ij,
where, as before,
K_ij = delta^2 w_i,x_m w_j,x_m
(abbreviated as delta^2 grad w grad w).
Mathematical validation of this model has begun in [GIL99]. We are also exploring:
*Models arising from more accurate rational
approximations,*
*Continuum models arising from space time averaging of the
Navier
Stokes*
*equations,*
*Continuum models based upon a 3 field decomposition into space
time*
*averages, space averages and fluctuations.*
** 3. Direct Simulation of Large Eddy Motion.**
Picking a length scale delta, the question of LES is simply how to approximate u^ := g*u with much greater accuracy and much less cost than approximating u. The approach taken in [JL98] is to take advantage of some special features of finite element approximations in the filtering process. First, (uh, ph) is calculated by approximate solution of the unfiltered Navier Stokes equations. Next, the approximate large eddy motion is obtained through post-processing by spatial filtering:
uh^ := g * uh,
ph^ := g * ph.
Since not all spatial scales are resolved in the approximations uh to u, we do not expect high accuracy here. In fact, it is expected (and known rigorously in model problems at least) that errors in this approximation will be highly oscillatory. The approximations uh to u might also be oscillatory due to unresolved solution scales. Heuristically, a local averaging operator might then kill these unwanted oscillations, and hence the leading order error terms as well. Thus, there are reasons to expect
|| uh^ - u^ || to be much smaller than || uh - u ||. Indeed, in our work it was proven that, with a degree k piecewise polynomial velocity approximation, the error estimate

(3.1)    || u^ - uh^ || < C (h/delta)^(k-1) h || grad( u - uh ) ||

holds, where h is a representative meshwidth and || . || denotes the L2 norm. As a first step this has been proven for the Stokes problem. The
The
present effort is to extend the understanding and experience with
this
research to the Oseen problem followed by the full, evolutionary
Navier
Stokes equations.
The estimate (3.1) shows that accurate resolution of the flow field is not needed to resolve the field's large eddies. This is because finite element approximations have highly oscillatory leading order error terms, and spatial filtering with a symmetric filter cancels such oscillations.
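This cancellation can be seen in a toy 1-d computation (illustrative only: the "error" is planted as a single high-frequency mode rather than coming from an actual finite element solve):

```python
import numpy as np

# Toy illustration: when uh - u is dominated by a high-frequency
# oscillation, a symmetric Gaussian filter nearly cancels it, so
# || uh^ - u^ || is far smaller than || uh - u ||.
n, delta = 1024, 0.1
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x)                        # "true" smooth field
uh = u + 0.1 * np.sin(60.0 * x)      # approximation with oscillatory error

k = np.fft.fftfreq(n, d=x[1] - x[0])
ghat = np.exp(-(2.0 * np.pi * k * delta) ** 2 / 2.0)
filt = lambda v: np.real(np.fft.ifft(ghat * np.fft.fft(v)))

err = np.sqrt(np.mean((uh - u) ** 2))                  # ~ || uh - u ||
err_bar = np.sqrt(np.mean((filt(uh) - filt(u)) ** 2))  # ~ || uh^ - u^ ||
print(err, err_bar)
```

The filtered error is many orders of magnitude smaller than the unfiltered one, in the spirit of (3.1).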
As research moves to more and more difficult
problems, the importance
of adaptivity based on good a posteriori error estimators
increases.
Thus
an important component of correct simulation of large eddy motion
is to
design a mesh refinement strategy, with full mathematical support,
which
provides guaranteed accuracy in the large eddies. To this end,
two a
posteriori error estimators were given in [JL98] for ||
uh^-u^
|| . These
estimators allow mesh cells to be marked for refinement only when
the mean
motion of the local small eddies influences the target accuracy
in the
large eddies. The first global estimator is established by
duality
techniques. It reflects the extra accuracy of the a priori
estimate
(3.1).
The second estimator of this report is more computationally intricate. As compensation, two-sided error bounds are proven for the second estimator: the sum of the local error indicators is an upper bound for || u^ - uh^ ||, and each local indicator provides a lower bound for the local error on a patch of elements.
**4. Scale-Similarity Models.**
Scale similarity models are based upon a 'scale
similarity assumption,
namely that
**unresolved eddies interact with the O(delta) eddies in
the**
**same way O(delta) eddies interact with O(2 delta)
eddies.**
This postulate
was advanced in 1980 by Bardina, Ferziger and Reynolds. Like
every
description in turbulent flow, it is quite successful at times
and less so
at other times. (Perhaps because generally only successes are published, there is a dearth of clear descriptions of when each approach fails. This knowledge is more valuable than successes!) Another way to think of the hypothesis is that it is a sort of extrapolation procedure, which works generally if there is a regular pattern in the data over the range extrapolated. Let T denote the Reynolds stress tensor in the
space
filtered flow equations:
T := (g*u) (g*u) - g*(u u) .
The most straightforward scale similarity assumption is that T
is modeled
by
T ~ (g*g*u) (g*g*u) - g*( g*u g*u).
The most basic such model is thus (where w denotes the
resulting
approximation for g*u and q for g*p ):
div w = 0 ,
dw/dt + div( w w ) + grad q - (1/Re) div
grad w
- div( g*w g*w - g*( w w ) ) = g*f.
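A scalar 1-d analogue (the sample field, the filter radius and the use of a Fourier-based Gaussian are all illustrative assumptions) shows the scale-similarity estimate tracking the exact stress:

```python
import numpy as np

# Scalar 1-d analogue of the scale-similarity approximation:
#   T       = (g*u)(g*u) - g*(u u)
#   T_model = (g*g*u)(g*g*u) - g*( (g*u)(g*u) ).
n, delta = 512, 0.2
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.4 * np.sin(5.0 * x + 1.0)  # arbitrary resolved field

k = np.fft.fftfreq(n, d=x[1] - x[0])
ghat = np.exp(-(2.0 * np.pi * k * delta) ** 2 / 2.0)
g = lambda v: np.real(np.fft.ifft(ghat * np.fft.fft(v)))

T_true = g(u) * g(u) - g(u * u)
T_model = g(g(u)) * g(g(u)) - g(g(u) * g(u))

corr = np.corrcoef(T_true, T_model)[0, 1]  # pointwise correlation
```

For this smooth field the exact and modeled stresses are strongly correlated, which is the sense in which the extrapolation works when the data has a regular pattern over the range extrapolated.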
Actually, this system is seldom solved approximately; authors hint
at
catastrophic blow up of solutions over long time intervals. (This
is
another case where a clear negative result would be very
interesting.)
This sort of growth is typically 'cured' by including a Smagorinsky-type subgrid scale model in the system. The included p-Laplacian then gives powerful control over polynomial nonlinearities, but is it physically correct?
This reasoning and the theoretical studies in
[GIL99] point to the
**KINETIC ENERGY** in the large eddies as the key functional
to track.
Mathematically, Young's inequality implies:
Kinetic Energy( g*u(.,t) ) <= Kinetic Energy( u(.,t) ) < infinity, for any t > 0.
The same cannot be said for solutions w(x,t) of large eddy models!
It is
our view that a good model for **LES** must satisfy the
following
requirement:
"**The kinetic energy in the solution of the model of the
large eddies**
**must be provably bounded for all t>0 without the inclusion of
ANY subgrid**
**scale model. It is even better if the model satisfies a kinetic
energy**
**equation similar to the one for the Navier Stokes equations.**"
This simple test of physical and mathematical reasonableness
apparently
is
seldom checked in LES.
Using the scale similarity hypothesis in a different way, we have developed a first scale similarity model satisfying this requirement in:
[L99] W. Layton, __Approximating the large eddies in fluid motion
IV:__
__Kinetic energy balance of a new scale similarity model__,
technical report,
1999.
**5. Calculation of ensembles.**
One fundamental, persistent and intractable
problem in computational
fluid dynamics is to evaluate the reliability of a calculation.
A
posteriori error estimators are a very important approach in
increasing
reliability. Until [JL98] only energy norm and L2 norm error
estimators
have been available. These are of little practical use in
turbulent
flow
problems. The estimators of [JL98] and under development in this
proposal
are an important step towards ensuring reliability of flow
calculations.
All error estimators, however, lead to the problem of evaluating the sensitivity of the continuous problem to data perturbation (i.e., the constant

|| DF_h(uh, ph)^(-1) ||_L(Xh, Xh)

in, for example,
[ELM98] V. Ervin, W Layton and J. Maubach, __Adaptive defect
correction__
__methods for high Reynolds number flow problems,__ to appear
in SIAM JNA).
In engineering practice a calculation is often accepted when the same qualitative flow features appear on two successive meshes. Of course, "benchmarking" is done on CFD codes; usually this is done, however, to calibrate the current code rather than to evaluate it. For example, in the recent comparative benchmark tests sponsored by the DFG, the initial results of the codes were all over the place; once a benchmark interval was published, suddenly each code's results were square in the middle of the published benchmark interval.
One technique for enhancing reliability, which hasn't yet been
fully
exploited in the Mathematical CFD community, is to compute
ensembles:
several flow fields are simultaneously computed which correspond
to
perturbations of the problem data. The deviations of the
required
functionals of the flow field are then examined vis-a-vis the
sizes
of the
data perturbations.
Our research is examining questions related to enhancing
reliability
by
ensemble computations. Some aspects include:
*1. Compute ensembles with negligible extra computational effort over a single solution.*
*2. Use ensemble calculation to provide information on
approximate*
*stability constants.*
*3. Use ensemble algorithms to simultaneously compute fluid sensitivities and flow fields.*
There has been some effort in numerical
analysis
on solving linear and
nonlinear algebraic systems with multiple right hand sides. The
first
research step has been to adapt this work from generic problems
to the
discretized Navier Stokes equations. Next, using ideas that occur already in condition number estimation for linear systems, we are studying using the ensemble calculations to estimate the stability constant inside the adaptive algorithm's refinement and termination tests. Including an on-the-fly estimation of this important constant will be an important step toward fully reliable turbulent flow simulation.
**6. Better Subgrid Scale Modelling.**
Recall the Boussinesq model of turbulent
fluctuations:
(6.1)    div[ (u'u')^ ] ~ div{ mu_t ( grad(u^) + grad(u^)^tr ) }.
This model is based on an analogy between the interaction of small eddies and elastic collisions of billiard-ball-like molecules. In spite of this tenuous analogy, mathematical support for (6.1) has been provided for models like convection-diffusion problems. (This is presented well in the book of Pironneau and Mohammadi.) If we accept the model (6.1), and there seems to be nothing better supported available in turbulence studies, then clearly the turbulent viscosity must depend only on:
i) the kinetic energy in the small eddies, k',
ii) the length scale of the averaging process, delta.
Of course, finding an approximation for k' in terms of u^ is
challenging.
In the report
[IL99] T. Iliescu and W. Layton, __Approximating the larger eddies in fluid motion III: the Boussinesq model for turbulent fluctuations__, to appear, 1999,
this was done. Using this formula for k' and the
Prandtl-Kolmogorov
relation we develop three new subgrid scale models:
mu_t = C delta^2 | g*(div grad u) |,
mu_t = C delta^2 | u - g*u |,
mu_t = C delta^2 | div grad u |.
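The three viscosities can be compared on a sample 1-d field (the constant C, the field itself and the Fourier-based filter are illustrative placeholders, not the choices of [IL99]):

```python
import numpy as np

# Illustrative 1-d evaluation of the three turbulent viscosities above.
n, delta, C = 512, 0.1, 0.01
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.05 * np.sin(30.0 * x)  # smooth mean + small fine scales

k = np.fft.fftfreq(n, d=x[1] - x[0])
ghat = np.exp(-(2.0 * np.pi * k * delta) ** 2 / 2.0)
g = lambda v: np.real(np.fft.ifft(ghat * np.fft.fft(v)))
lap = lambda v: np.real(np.fft.ifft(-(2.0 * np.pi * k) ** 2 * np.fft.fft(v)))

mu1 = C * delta ** 2 * np.abs(g(lap(u)))  # filtered Laplacian
mu2 = C * delta ** 2 * np.abs(u - g(u))   # resolved fluctuation u - g*u
mu3 = C * delta ** 2 * np.abs(lap(u))     # unfiltered Laplacian
```

Note that mu2 stays small wherever u is dominated by smooth, resolved scales, consistent with the claim below that the new viscosity vanishes for laminar flows.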
The implementation of the first and last is unclear for C0 finite elements. We show how this is accomplished using edge jumps and prove that, at its largest, the new viscosity is bounded by the Smagorinsky model. It vanishes for laminar flows, for which the Smagorinsky model is overly diffusive. Lastly, an existence theory is presented for the new model.