# OverPimpleDyMFoam


Transient solver for incompressible flow of Newtonian fluids on a moving mesh using the PIMPLE (merged PISO-SIMPLE) algorithm. Turbulence modelling is generic, i.e. laminar, RAS or LES may be selected.

## 1 Solution Strategy

The solver follows a segregated solution strategy. This means that the equations for each variable characterizing the system (the velocity $\mathbf{u}$, the pressure $p$ and the variables characterizing turbulence) are solved sequentially, and the solution of the preceding equation is inserted into the subsequent equation. The non-linearity appearing in the momentum equation (the face flux $\phi_f$, which is a function of the velocity) is resolved by computing it from the velocity and pressure values of the preceding iteration. The dependence on the pressure is introduced to avoid a decoupling between the momentum and pressure equations and hence the appearance of high-frequency oscillations in the solution (checkerboard effect). The first equation to be solved is the momentum equation. It delivers a velocity field $\mathbf{u}^*$ which is in general not divergence free, i.e. it does not satisfy the continuity equation. After that, the momentum and continuity equations are used to construct an equation for the pressure. The aim is to obtain a pressure field $p^{n}$ which, if inserted in the momentum equation, delivers a divergence-free velocity field $\mathbf{u}$. After correcting the velocity field, the equations for turbulence are solved. This iterative solution procedure is repeated until convergence.
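The segregated fixed-point idea can be illustrated with a toy system of two coupled scalar equations (purely schematic, not OpenFOAM code): each "equation" is solved using the latest value of the other variable, and the sequence is repeated until the updates become small.

```cpp
#include <cassert>
#include <cmath>

// Schematic sketch of a segregated fixed-point iteration: two coupled
// scalar "equations" u = 0.5*p + 1 and p = 0.5*u stand in for the
// momentum and pressure equations. They are solved sequentially,
// inserting the latest value of one variable into the equation for
// the other. The exact solution is u = 4/3, p = 2/3.
int solveSegregated(double& u, double& p, double tol = 1e-12)
{
    int iters = 0;
    while (true)
    {
        const double uNew = 0.5*p + 1.0;  // "momentum predictor"
        const double pNew = 0.5*uNew;     // "pressure correction"
        const double res = std::fabs(uNew - u) + std::fabs(pNew - p);
        u = uNew;
        p = pNew;
        ++iters;
        if (res < tol) break;  // converged: updates no longer change
    }
    return iters;
}
```

Because each sub-step is a contraction here, the sequential sweep converges; in the solver the same role is played by the outer PIMPLE iterations.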

The overset method makes it possible to solve the governing equations on a set of disjoint meshes, i.e. meshes that are not connected over faces. The coupling between the different meshes is achieved by an implicit interpolation.

The source code can be found in overPimpleDyMFoam.C



/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     |
    \\  /    A nd           | www.openfoam.com
     \\/     M anipulation  |
-------------------------------------------------------------------------------
This file is part of OpenFOAM.

OpenFOAM is free software: you can redistribute it and/or modify it
under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

OpenFOAM is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License
for more details.

You should have received a copy of the GNU General Public License
along with OpenFOAM.  If not, see <http://www.gnu.org/licenses/>.

Application
overPimpleDyMFoam

Group
grpIncompressibleSolvers grpMovingMeshSolvers

Description
Transient solver for incompressible flow of Newtonian fluids
on a moving mesh using the PIMPLE (merged PISO-SIMPLE) algorithm.

Turbulence modelling is generic, i.e. laminar, RAS or LES may be selected.

\*---------------------------------------------------------------------------*/

#include "fvCFD.H"
#include "dynamicFvMesh.H"
#include "singlePhaseTransportModel.H"
#include "turbulentTransportModel.H"
#include "pimpleControl.H"
#include "fvOptions.H"

#include "cellCellStencilObject.H"
#include "localMin.H"
#include "interpolationCellPoint.H"
#include "transform.H"
#include "fvMeshSubset.H"

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

int main(int argc, char *argv[])
{
argList::addNote
(
"Transient solver for incompressible, turbulent flow"
" on a moving mesh."
);

#include "postProcess.H"

#include "setRootCaseLists.H"
#include "createTime.H"
#include "createDynamicFvMesh.H"
#include "initContinuityErrs.H"

pimpleControl pimple(mesh);

#include "createFields.H"
#include "createUf.H"
#include "createMRF.H"
#include "createFvOptions.H"
#include "createControls.H"
#include "CourantNo.H"
#include "setInitialDeltaT.H"

turbulence->validate();

// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //

Info<< "\nStarting time loop\n" << endl;

while (runTime.run())
{
#include "CourantNo.H"

#include "setDeltaT.H"

++runTime;

Info<< "Time = " << runTime.timeName() << nl << endl;

bool changed = mesh.update();

if (changed)
{
#include "setCellMask.H"
#include "setInterpolatedCells.H"

surfaceScalarField faceMaskOld
(
localMin<scalar>(mesh).interpolate(cellMask.oldTime())
);

// Zero Uf on old faceMask (H-I)
Uf *= faceMaskOld;

// Update Uf and phi on new C-I faces
Uf += (1-faceMaskOld)*fvc::interpolate(U);
phi = mesh.Sf() & Uf;

// Zero phi on current H-I
surfaceScalarField faceMask
(
localMin<scalar>(mesh).interpolate(cellMask)
);
phi *= faceMask;
}

if (mesh.changing() && correctPhi)
{
// Calculate absolute flux from the mapped surface velocity
#include "correctPhi.H"
}

// Make the flux relative to the mesh motion
fvc::makeRelative(phi, U);

if (mesh.changing() && checkMeshCourantNo)
{
#include "meshCourantNo.H"
}

// --- Pressure-velocity PIMPLE corrector loop
while (pimple.loop())
{
#include "UEqn.H"

// --- Pressure corrector loop
while (pimple.correct())
{
#include "pEqn.H"
}

if (pimple.turbCorr())
{
laminarTransport.correct();
turbulence->correct();
}
}

runTime.write();

runTime.printExecutionTime(Info);
}

Info<< "End\n" << endl;

return 0;
}

// ************************************************************************* //


## 2 Equations

The equations of motion used for moving meshes are written in the arbitrary Lagrangian-Eulerian (ALE) formulation. This formulation is one of the most popular if morphing meshes are used to describe solid body deformation or displacement [1]. If the overset method is used, this formulation avoids formulating the equations of motion in multiple frames of reference [2]. For the derivation of the equations see [3] [4]. The continuity and momentum equations read in this form:

 - $\frac{d}{dt}\int\rho dV + \oint\rho \mathbf{n} \cdot (\mathbf{U} - \mathbf{U_g} ) dS = 0.$ (1)

 - $\frac{d}{dt}\int\rho \mathbf{U} dV + \int\rho \mathbf{\omega} \times \mathbf{U} dV + \oint\rho \mathbf{n} \cdot (\mathbf{U} - \mathbf{U_g} ) \mathbf{U} dS - \oint \rho \nu \mathbf{n} \cdot \nabla \mathbf{U} dS + \oint p \mathbf{n} dS = 0.$ (2)

In the above equations $\mathbf{U}$ is the fluid velocity, $p$ the pressure, $\nu$ the molecular viscosity and $\rho$ the density. The only difference to the equations of motion on a fixed mesh is the introduction of a new variable, the grid velocity $\mathbf{U_g}$. In order to determine the grid velocity, the space conservation law (SCL) has to be solved (see also [5] [6]):

 - $\frac{d}{dt}\int dV - \oint \mathbf{n} \cdot \mathbf{U_g} dS = 0.$ (3)

In order not to introduce additional mass conservation errors, the temporal discretisation of the SCL should be the same as the one used in the other conservation equations (see also [7] [8]).

OpenFOAM uses a slightly different continuity equation (which is used to derive the equation for the pressure). By inserting the space conservation law into the continuity equation we get, for the special case of incompressible flow:

 - $\frac{d}{dt}\int\rho dV + \oint\rho \mathbf{n} \cdot \mathbf{U} dS = 0.$ (4)

It is the same as the one for fixed grids. Note that this transformation is valid only for incompressible flows.
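The cancellation can be made explicit. Inserting the space conservation law into the ALE continuity equation for constant $\rho$ gives:

```latex
\frac{d}{dt}\int \rho \, dV + \oint \rho\,\mathbf{n}\cdot(\mathbf{U}-\mathbf{U_g})\, dS
  = \rho\underbrace{\left[\frac{d}{dt}\int dV - \oint \mathbf{n}\cdot\mathbf{U_g}\, dS\right]}_{=\,0\ \text{(SCL)}}
  + \rho\oint \mathbf{n}\cdot\mathbf{U}\, dS = 0
\quad\Longrightarrow\quad
\oint \mathbf{n}\cdot\mathbf{U}\, dS = 0 .
```

The grid velocity drops out and only the divergence-free constraint on $\mathbf{U}$, as on a fixed grid, remains.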

### 2.1 Momentum Equation

In this section we analyze the momentum equation implemented in the solver in a bit more detail. As already seen above, the momentum equation reads:

 - $\frac{d}{dt}\int\rho \mathbf{U} dV + \int\rho \mathbf{\omega} \times \mathbf{U} dV + \oint\mathbf{U} \underbrace{ \rho \mathbf{n} \cdot (\mathbf{U} - \mathbf{U_g} )}_\phi dS - \oint \rho \nu \mathbf{n} \cdot \nabla \mathbf{U} dS + \oint p \mathbf{n} dS = 0.$ (5)

$\phi$ is the mass flux over the surface $dS$ and $\mathbf{\omega}$ is the angular velocity of the rotating frame of reference. We see that the momentum equation is essentially the same as the one for a fixed control volume (see, e.g., https://openfoamwiki.net/index.php/SimpleFoam). The only difference is how the face flux $\phi$ is calculated: in a fixed mesh, only the velocity in the inertial coordinate system $\mathbf{U}$ is used to calculate it, while for the overset mesh the difference between the velocity in the inertial coordinate system $\mathbf{U}$ and the grid velocity $\mathbf{U_g}$ is used.

The source code of the momentum equation is shown below. We see that in front of the pressure gradient the field cellMask is applied. It is 0 for hole cells and 1 elsewhere. This means that for background cells which are covered by a solid structure the pressure gradient term does not give any contribution to the momentum equation.



// Solve the Momentum equation

MRF.correctBoundaryVelocity(U);

tmp<fvVectorMatrix> tUEqn
(
fvm::ddt(U) + fvm::div(phi, U)
+ MRF.DDt(U)
+ turbulence->divDevReff(U)
==
fvOptions(U)
);
fvVectorMatrix& UEqn = tUEqn.ref();

UEqn.relax();

fvOptions.constrain(UEqn);

if (pimple.momentumPredictor())
{
solve(UEqn == -cellMask*fvc::grad(p));

fvOptions.correct(U);
}



### 2.2 Pressure Equation

Since the continuity equation for incompressible flow written in the ALE (arbitrary Lagrangian-Eulerian) formulation is the same as for a fixed grid (see previous section), the pressure equation is also basically the same. For this reason the derivation of the pressure equation is omitted; for a detailed derivation see https://openfoamwiki.net/index.php/SimpleFoam and the references therein. In this section we concentrate on the peculiarities of the pressure equation introduced by the overset method.

The code looks like:



// Option 1: interpolate rAU, do not block out rAU on blocked cells
volScalarField rAU("rAU", 1.0/UEqn.A());
mesh.interpolate(rAU);

// Option 2: do not interpolate rAU but block out rAU
//surfaceScalarField rAUf("rAUf", fvc::interpolate(blockedCells*rAU));

// Option 3: do not interpolate rAU but zero out rAUf on faces on holes
//
//   H
// H I C C C C
//   H
//
surfaceScalarField rAUf("rAUf", fvc::interpolate(rAU));
volVectorField H("H", UEqn.H());

volVectorField HbyA("HbyA", U);
HbyA = constrainHbyA(rAU*H, U, p);

if (massFluxInterpolation)
{
#include "interpolatedFaces.H"
}

if (runTime.outputTime())
{
H.write();
rAU.write();
HbyA.write();
}

if (pimple.nCorrPISO() <= 1)
{
tUEqn.clear();
}

phiHbyA = fvc::flux(HbyA);

if (ddtCorr)
{
phiHbyA += fvc::interpolate(rAU)*fvc::ddtCorr(U, Uf);
}

MRF.makeRelative(phiHbyA);

// WIP
if (p.needReference())
{
fvc::makeRelative(phiHbyA, U);
adjustPhi(phiHbyA, U, p);
fvc::makeAbsolute(phiHbyA, U);
}

if (adjustFringe)
{
fvc::makeRelative(phiHbyA, U);
oversetAdjustPhi(phiHbyA, U);
fvc::makeAbsolute(phiHbyA, U);
}

while (pimple.correctNonOrthogonal())
{
fvScalarMatrix pEqn
(
fvm::laplacian(rAUf, p) == fvc::div(phiHbyA)
);

pEqn.setReference(pRefCell, pRefValue);

pEqn.solve(mesh.solver(p.select(pimple.finalInnerIter())));

if (pimple.finalNonOrthogonalIter())
{
phi = phiHbyA - pEqn.flux();
// option 2:
}
}

#include "continuityErrs.H"

// Explicitly relax pressure for momentum corrector
p.relax();

// Option 2: zero out velocity on blocked out cells
// Option 3: zero out velocity on blocked out cells
// This is needed for the scalar Eq (k,epsilon, etc)
// which can use U as source term
U = cellMask*(HbyA - rAU*fvc::grad(p));
U.correctBoundaryConditions();

fvOptions.correct(U);

{
Uf = fvc::interpolate(U);
surfaceVectorField n(mesh.Sf()/mesh.magSf());
Uf += n*(phi/mesh.magSf() - (n & Uf));
}

// Make the fluxes relative to the mesh motion
fvc::makeRelative(phi, U);

surfaceScalarField faceMask
(
localMin<scalar>(mesh).interpolate(cellMask)
);
phi *= faceMask;



Looking at the source code we see that there are two code passages which try to correct the mass flux imbalance produced by the overset interpolation:


if (massFluxInterpolation)
{
#include "interpolatedFaces.H"
}


and


if (adjustFringe)
{
fvc::makeRelative(phiHbyA, U);
oversetAdjustPhi(phiHbyA, U);
fvc::makeAbsolute(phiHbyA, U);
}

They are activated by setting the switches massFluxInterpolation and adjustFringe in the PIMPLE subdictionary of the fvSolution dictionary to yes.

If the adjustFringe switch is set to yes, the code loops over all faces and checks whether a face lies at the border between a calculated and an interpolated cell. If so, the mass flux entering the calculated cells and the mass flux leaving them are summed over all such faces of a mesh zone. From these sums a correction factor is computed: the ratio of the mass flux entering the calculated cells over this border to the mass flux leaving them. This factor is then applied to all faces at the border between calculated and interpolated cells. In this way the global mass balance over a mesh zone is enforced. Note that a mesh zone is a topologically connected region of the overset mesh.

The first part of the code is shown below. The flux is defined as pointing from a calculated cell towards an interpolated cell. The code first loops over all internal faces and accumulates the mass flux going into a calculated cell bordering an interpolated cell (negative flux) and the mass flux leaving it (positive flux). The same is done by looping over the boundary faces. In order to determine the type of the cells on the neighbouring processor, the function
 syncTools::swapBoundaryCellList(mesh, cellTypes, neiCellTypes);

is called.

forAll(own, facei)
{
label zonei = zoneID[own[facei]];   // note:own and nei in same zone

label ownType = cellTypes[own[facei]];
label neiType = cellTypes[nei[facei]];

bool ownCalc =
(ownType == cellCellStencil::CALCULATED)
&& (neiType == cellCellStencil::INTERPOLATED);

bool neiCalc =
(ownType == cellCellStencil::INTERPOLATED)
&& (neiType == cellCellStencil::CALCULATED);

if (ownCalc || neiCalc)
{
// Calculate flux w.r.t. calculated cell
scalar flux = phi[facei];
if (neiCalc)
{
flux = -flux;
}

if (flux < 0.0)
{
massIn[zonei] -= flux;
}
else
{
adjustableMassOut[zonei] += flux;
}
}
}

// Check all coupled faces on the outside of interpolated cells
labelList neiCellTypes;
syncTools::swapBoundaryCellList(mesh, cellTypes, neiCellTypes);
{
forAll(bphi, patchi)
{
const fvPatchVectorField& Up = U.boundaryField()[patchi];
const fvsPatchScalarField& phip = bphi[patchi];
const labelUList& fc = Up.patch().faceCells();

label start = Up.patch().start();

forAll(fc, i)
{
label facei = start+i;
label celli = fc[i];
label ownType = cellTypes[celli];
label neiType = neiCellTypes[facei-mesh.nInternalFaces()];

bool ownCalc =
(ownType == cellCellStencil::CALCULATED)
&& (neiType == cellCellStencil::INTERPOLATED);

if (ownCalc)
{
// Calculate flux w.r.t. calculated cell
scalar flux = phip[i];

if (flux < 0.0)
{
massIn[zoneID[celli]] -= flux;
}
else
{
adjustableMassOut[zoneID[celli]] += flux;
}
}
}
}
}

The correction factor for each cell zone is calculated by the following line:

 massCorr[zonei] = massIn[zonei]/adjustableMassOut[zonei];

It is the ratio between the mass flux going over the border from an interpolated cell into a calculated cell and the mass flux leaving the calculated cell into an interpolated cell.

The correction factor massCorr is applied in the final step of the code. Here again, a loop over all internal faces and over all boundary faces is performed. The purpose of the correction factor is to drive the ratio, taken over a mesh zone, of the mass flux from interpolated cells into calculated cells to the mass flux from calculated cells into interpolated cells to one.



forAll(own, facei)
{
label zonei = zoneID[own[facei]];   // note:own and nei in same zone

label ownType = cellTypes[own[facei]];
label neiType = cellTypes[nei[facei]];

bool ownCalc =
(ownType == cellCellStencil::CALCULATED)
&& (neiType == cellCellStencil::INTERPOLATED);

bool neiCalc =
(ownType == cellCellStencil::INTERPOLATED)
&& (neiType == cellCellStencil::CALCULATED);

if (ownCalc || neiCalc)
{
// Calculate flux w.r.t. calculated cell
scalar flux = phi[facei];
if (neiCalc)
{
flux = -flux;
}

if (flux < 0.0)
{
phi[facei] /= Foam::sqrt(massCorr[zonei]);
}
else
{
phi[facei] *= Foam::sqrt(massCorr[zonei]);
}
}
}

forAll(bphi, patchi)
{
const fvPatchVectorField& Up = U.boundaryField()[patchi];
fvsPatchScalarField& phip = bphi[patchi];
const labelUList& fc = Up.patch().faceCells();

label start = Up.patch().start();

forAll(fc, i)
{
label facei = start+i;
label celli = fc[i];
label zonei = zoneID[celli];   // note:own and nei in same zone
label ownType = cellTypes[celli];
label neiType = neiCellTypes[facei-mesh.nInternalFaces()];

bool ownCalc =
(ownType == cellCellStencil::CALCULATED)
&& (neiType == cellCellStencil::INTERPOLATED);

bool neiCalc =
(ownType == cellCellStencil::INTERPOLATED)
&& (neiType == cellCellStencil::CALCULATED);

if (ownCalc || neiCalc)
{
// Calculate flux w.r.t. calculated cell
scalar flux = phip[i];
if (neiCalc)
{
flux = -flux;
}

if (flux < 0.0)
{
phip[i] /= Foam::sqrt(massCorr[zonei]);
}
else
{
phip[i] *= Foam::sqrt(massCorr[zonei]);
}
}
}
}
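The net effect of the two loops above can be sketched in isolation. The following standalone snippet (illustrative only, not OpenFOAM code) balances a set of fringe-face fluxes of a single mesh zone in the same way: it assumes the zone has both inflow and outflow faces, divides entering fluxes by sqrt(massCorr) and multiplies leaving fluxes by it, so the zone-wide imbalance vanishes.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hedged sketch of the fringe flux balancing described above.
// Each entry is the flux over one calculated/interpolated border
// face, signed w.r.t. the calculated cell: negative = entering.
// Assumes at least one entering and one leaving flux per zone.
void adjustFringeFluxes(std::vector<double>& fringeFluxes)
{
    double massIn = 0.0;   // total flux entering calculated cells
    double massOut = 0.0;  // total flux leaving calculated cells
    for (double flux : fringeFluxes)
    {
        if (flux < 0.0) massIn -= flux;
        else massOut += flux;
    }

    const double massCorr = massIn/massOut;

    // Scale both directions by sqrt(massCorr): inflow and outflow
    // both end up at sqrt(massIn*massOut), so they balance exactly.
    for (double& flux : fringeFluxes)
    {
        if (flux < 0.0) flux /= std::sqrt(massCorr);
        else flux *= std::sqrt(massCorr);
    }
}
```

After the correction, inflow magnitude and outflow magnitude both equal sqrt(massIn*massOut), which is why the same factor can be applied as a division on one side and a multiplication on the other.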




## 3 Overset interpolation

In this section we analyze in more detail how the overset interpolation is achieved in OpenFOAM. Generally, OpenFOAM uses an implicit interpolation. An overset interpolation can be written as:

 - $\phi_a = \sum_i \omega_i \phi_{d,i}$ (6)

$\phi_a$ is the value of the variable $\phi$ interpolated at the acceptor cell, $\phi_{d,i}$ are the values of the donor cells and $\omega_i$ are the weights with which the values of the donor cells $\phi_d$ enter. In an overset mesh, donor and acceptor cells lie on disconnected mesh zones. This means that they are not connected over a face and the standard connectivity between cells cannot be used to construct the matrix. In brief, the procedure to achieve an overset interpolation in OpenFOAM is the following: 1) extend the existing LDU addressing, 2) insert the coefficients resulting from the overset interpolation into the matrix and 3) solve the linear system.
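Written out explicitly, the interpolation is just a weighted sum over the donor values. A minimal standalone sketch (plain C++, not OpenFOAM code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hedged sketch of the overset interpolation formula above:
// phi_a = sum_i w_i * phi_d[i]. In the solver this relation is not
// evaluated explicitly like here, but inserted as a row of the
// linear system (implicit interpolation); this snippet only shows
// the arithmetic. Assumes weights.size() == donorValues.size().
double interpolateAcceptor(const std::vector<double>& donorValues,
                           const std::vector<double>& weights)
{
    double phiA = 0.0;
    for (std::size_t i = 0; i < donorValues.size(); ++i)
    {
        phiA += weights[i]*donorValues[i];  // weighted donor value
    }
    return phiA;
}
```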

A brief description of how a matrix is constructed in OpenFOAM using a finite volume discretization can be found here https://openfoamwiki.net/index.php/OpenFOAM_guide/Matrices_in_OpenFOAM.

### 3.1 Extension of the ldu addressing

Let's start by analyzing how the LDU addressing in OpenFOAM is extended by the additional connectivity resulting from the overset interpolation. As a first step, let's recap how the connectivity between cells and the entries of the matrix are stored in OpenFOAM. The connectivity between the cells is stored in two arrays of the same length:


lowerAddr()
upperAddr()

In the
 lowerAddr()

array the smaller index of the two coupled cells is stored and in the
 upperAddr()

array the higher index of the two coupled cells is stored. In OpenFOAM both arrays have a length equal to the number of internal faces, since only adjacent cells are used for the discretization.
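As a concrete toy illustration of this addressing, consider a 1D mesh of three cells in a row: face 0 couples cells 0 and 1, face 1 couples cells 1 and 2. The snippet below (illustrative, not OpenFOAM code) builds the two arrays for such a mesh:

```cpp
#include <cassert>
#include <vector>

// Hedged sketch of LDU face addressing for a 1D chain of nCells
// cells: per internal face, lowerAddr holds the smaller (owner)
// cell index and upperAddr the larger (neighbour) cell index.
struct LduAddressing
{
    std::vector<int> lowerAddr;
    std::vector<int> upperAddr;
};

LduAddressing oneDimAddressing(int nCells)
{
    LduAddressing addr;
    for (int facei = 0; facei < nCells - 1; ++facei)
    {
        addr.lowerAddr.push_back(facei);      // smaller cell index
        addr.upperAddr.push_back(facei + 1);  // larger cell index
    }
    return addr;
}
```

Both arrays have nCells - 1 entries, one per internal face, ordered by face number.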

The corresponding entries in the matrix are stored in other two arrays:


lower()
upper()

In order to account for the additional coupling between cell centers introduced by the overset mesh approach, new entries have to be added to the matrix. This means that the length of the above four arrays has to be enlarged. To the arrays

lowerAddr()

upperAddr()

the additional indices of the cells coupled by the overset interpolation have to be added. To the arrays

lower()

upper()

the matrix coefficients resulting from the coupling have to be added. This is done during the mesh update step:

bool changed = mesh.update();


For solvers which use the overset method, the mesh instance is of type dynamicOversetFvMesh. The method update can be found in dynamicOversetFvMesh.C:


bool Foam::dynamicOversetFvMesh::update()
{
//if (dynamicMotionSolverFvMesh::update())
if (dynamicMotionSolverListFvMesh::update())
{
// Calculate the local extra faces for the interpolation. Note: could
// let demand-driven lduAddr() trigger it but just to make sure.

// Addressing and/or weights have changed. Make interpolated cells
// up to date with donors
interpolateFields();

return true;
}

return false;
}
The extension of the addressing is performed by a helper function in dynamicOversetFvMesh.C. Its parameter list is shown below; note that the function name and some of the variables it uses (such as addr, nCells and nFaces) are elided in this excerpt:



(
const labelListList& nbrCells,
label& nExtraFaces,
labelListList& nbrCellFaces,
const globalIndex& globalNumbering,
const labelList& globalCellIDs,
labelListList& localFaceCells,
labelListList& remoteFaceCells
)
{
labelList nProcFaces(Pstream::nProcs(), Zero);

nExtraFaces = 0;
forAll(nbrCells, cellI)
{
const labelList& nbrs = nbrCells[cellI];
forAll(nbrs, nbrI)
{
if (nbrs[nbrI] < nCells)
{
// Local cell
if (triIndex(addr, cellI, nbrs[nbrI]) == -1)
{
nExtraFaces++;
}
}
else
{
label globalNbr = globalCellIDs[nbrs[nbrI]];
label procI = globalNumbering.whichProcID(globalNbr);
nProcFaces[procI]++;
}
}
}

// Create space for extra addressing

// Per processor its local cells we want
localFaceCells.setSize(Pstream::nProcs());
remoteFaceCells.setSize(Pstream::nProcs());
forAll(nProcFaces, procI)
{
localFaceCells[procI].setSize(nProcFaces[procI]);
remoteFaceCells[procI].setSize(nProcFaces[procI]);
}
nProcFaces = 0;

nbrCellFaces.setSize(nbrCells.size());
forAll(nbrCells, cellI)
{
const labelList& nbrs = nbrCells[cellI];
labelList& faces = nbrCellFaces[cellI];
faces.setSize(nbrs.size());

forAll(nbrs, nbrI)
{
label nbrCellI = nbrs[nbrI];

if (nbrCellI < nCells)
{
// Find neighbour cell in owner list
label faceI = triIndex(addr, cellI, nbrCellI);
if (faceI == -1)
{
faceI = nFaces++;
}
faces[nbrI] = faceI;
}
else
{
// Remote cell
faces[nbrI] = -1;

label globalNbr = globalCellIDs[nbrCellI];
label procI = globalNumbering.whichProcID(globalNbr);
label remoteCellI = globalNumbering.toLocal(procI, globalNbr);

label procFaceI = nProcFaces[procI]++;
localFaceCells[procI][procFaceI] = cellI;
remoteFaceCells[procI][procFaceI] = remoteCellI;
}
}
}

// Reorder upper-triangular
labelList oldToNew
(
lduPrimitiveMesh::upperTriOrder
(
)
);

forAll(nbrCellFaces, cellI)
{
inplaceRenumber(oldToNew, nbrCellFaces[cellI]);
}

return oldToNew;
}

The information about the indices of the coupled cells is contained in the cellCellStencil object. It is basically a list of lists: for each index of an interpolated cell it contains the indices of the donor cells. Once the pairing of donor cell and acceptor cell is known, the LDU addressing can be updated: the lower of the two indices is inserted in the lowerAddr array and the higher one in the upperAddr array. This is done by the following code snippet:


if (nbrCellI < nCells)
{
// Find neighbour cell in owner list
label faceI = triIndex(addr, cellI, nbrCellI);
if (faceI == -1)
{
faceI = nFaces++;
}
faces[nbrI] = faceI;
}

### 3.2 Modification of the ldu matrix

After having extended the addressing by the additional connectivity resulting from the overset interpolation, the coefficients resulting from the overset interpolation have to be inserted into the matrix, and the matrix has to be solved. This happens in dynamicOversetFvMesh::solve. The function takes the matrix resulting from the standard finite volume discretization (assembled in the files UEqn.H, pEqn.H and so on), adds the coefficients resulting from the overset interpolation and solves the matrix:



template<class Type>
Foam::SolverPerformance<Type> Foam::dynamicOversetFvMesh::solve
(
fvMatrix<Type>& m,
const dictionary& dict
) const
{
typedef GeometricField<Type, fvPatchField, volMesh> GeoField;
// Check we're running with bcs that can handle implicit matrix manipulation
typename GeoField::Boundary& bpsi =
const_cast<GeoField&>(m.psi()).boundaryFieldRef();

bool hasOverset = false;
forAll(bpsi, patchi)
{
if (isA<oversetFvPatchField<Type>>(bpsi[patchi]))
{
hasOverset = true;
break;
}
}

if (!hasOverset)
{
if (debug)
{
Pout<< "dynamicOversetFvMesh::solve() :"
<< " bypassing matrix adjustment for field " << m.psi().name()
<< endl;
}
//return dynamicMotionSolverFvMesh::solve(m, dict);
return dynamicFvMesh::solve(m, dict);
}

if (debug)
{
Pout<< "dynamicOversetFvMesh::solve() :"
<< " adjusting matrix for interpolation for field "
<< m.psi().name() << endl;
}

// Calculate stabilised diagonal as normalisation for interpolation
const scalarField norm(normalisation(m));

if (debug)
{
volScalarField scale
(
IOobject
(
m.psi().name() + "_normalisation",
this->time().timeName(),
*this,
IOobject::NO_WRITE,
false
),
*this,
dimensionedScalar(dimless, Zero)
);
scale.ref().field() = norm;
correctBoundaryConditions
<
volScalarField,
oversetFvPatchField<scalar>
>(scale.boundaryFieldRef(), false);
scale.write();

if (debug)
{
Pout<< "dynamicOversetFvMesh::solve() :"
<< " writing matrix normalisation for field " << m.psi().name()
<< " to " << scale.name() << endl;
}
}

// Switch to extended addressing (requires mesh::update() having been
// called)
active(true);

scalarField oldUpper(m.upper());
scalarField oldLower(m.lower());
FieldField<Field, Type> oldInt(m.internalCoeffs());
FieldField<Field, Type> oldBou(m.boundaryCoeffs());
const label oldSize = bpsi.size();

// Swap psi values so added patches have patchNeighbourField
correctBoundaryConditions<GeoField, calculatedProcessorFvPatchField<Type>>
(
bpsi,
true
);

// Print a bit
//{
//   const fvSolution& sol = static_cast<const fvSolution&>(*this);
//    const dictionary& pDict = sol.subDict("solvers").subDict("p");
//    writeAgglomeration(GAMGAgglomeration::New(m, pDict));
//}

// Use lower level solver
//SolverPerformance<Type> s(dynamicMotionSolverFvMesh::solve(m, dict));
SolverPerformance<Type> s(dynamicFvMesh::solve(m, dict));

// Restore boundary field
bpsi.setSize(oldSize);

// Restore matrix
m.upper().transfer(oldUpper);
m.lower().transfer(oldLower);
m.internalCoeffs().transfer(oldInt);
m.boundaryCoeffs().transfer(oldBou);

active(false);

return s;
}


The actual interpolation is done in the function dynamicOversetFvMesh::addInterpolation:



template<class Type>
void Foam::dynamicOversetFvMesh::addInterpolation
(
fvMatrix<Type>& m,
const scalarField& normalisation
) const
{
const cellCellStencilObject& overlap = Stencil::New(*this);
const List<scalarList>& wghts = overlap.cellInterpolationWeights();
const labelListList& stencil = overlap.cellStencil();
const labelList& cellIDs = overlap.interpolationCells();
const scalarList& factor = overlap.cellInterpolationWeight();
const labelUList& types = overlap.cellTypes();

// Force asymmetric matrix (if it wasn't already)
scalarField& lower = m.lower();
scalarField& upper = m.upper();
Field<Type>& source = m.source();
scalarField& diag = m.diag();

// Get the addressing. Note that the addressing is now extended with
// any interpolation faces.
const lduInterfacePtrsList& interfaces = allInterfaces_;

{
FatalErrorInFunction
<< exit(FatalError);
}

inplaceReorder(reverseFaceMap_, upper);
inplaceReorder(reverseFaceMap_, lower);

//const label nOldInterfaces = dynamicMotionSolverFvMesh::interfaces().size();
const label nOldInterfaces = dynamicFvMesh::interfaces().size();

if (interfaces.size() > nOldInterfaces)
{
// Extend matrix coefficients
m.internalCoeffs().setSize(interfaces.size());
m.boundaryCoeffs().setSize(interfaces.size());

for
(
label patchi = nOldInterfaces;
patchi < interfaces.size();
patchi++
)
{
const labelUList& fc = interfaces[patchi].faceCells();
m.internalCoeffs().set(patchi, new Field<Type>(fc.size(), Zero));
m.boundaryCoeffs().set(patchi, new Field<Type>(fc.size(), Zero));
}

//     GeometricField::scalarInterfaces() to get hold of interfaces)
typedef GeometricField<Type, fvPatchField, volMesh> GeoField;

typename GeoField::Boundary& bfld =
const_cast<GeoField&>(m.psi()).boundaryFieldRef();

bfld.setSize(interfaces.size());

// This gets quite interesting: we do not want to add additional
// fvPatches (since direct correspondence to polyMesh) so instead
// add a reference to an existing processor patch
for (label patchi = 0; patchi < nOldInterfaces; patchi++)
{
if (isA<processorFvPatch>(bfld[patchi].patch()))
{
break;
}
}

for
(
label patchi = nOldInterfaces;
patchi < interfaces.size();
patchi++
)
{
bfld.set
(
patchi,
new calculatedProcessorFvPatchField<Type>
(
interfaces[patchi],
m.psi()
)
);
}
}

// 2. Adapt fvMatrix level: faceFluxCorrectionPtr
// Question: do we need to do this?
// This seems to be set/used only by the gaussLaplacianScheme and
// fvMatrix:correction, both of which are outside the linear solver.

// Clear out existing connections on cells to be interpolated
// ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// Note: could avoid doing the zeroing of the new faces since these
//       are set to zero anyway.

{
{
// Disconnect upper from lower
lower[facei] *= 1.0-factor[celli];
}
{
// Disconnect lower from upper
upper[facei] *= 1.0-factor[celli];
}
}

for (label patchi = 0; patchi < nOldInterfaces; ++patchi)
{
const labelUList& fc = lduAddr().patchAddr(patchi);
Field<Type>& intCoeffs = m.internalCoeffs()[patchi];
Field<Type>& bouCoeffs = m.boundaryCoeffs()[patchi];
forAll(fc, i)
{
label celli = fc[i];
{
if (types[celli] == cellCellStencil::INTERPOLATED)
{
scalar f = factor[celli];
intCoeffs[i] *= 1.0-f;
bouCoeffs[i] *= 1.0-f;
}
else if (types[celli] == cellCellStencil::HOLE)
{
intCoeffs[i] = pTraits<Type>::zero;
bouCoeffs[i] = pTraits<Type>::zero;
}
}
}
}

// Modify matrix
// ~~~~~~~~~~~~~

// Do hole cells. Note: maybe put into interpolationCells() loop above?
forAll(types, celli)
{
if (types[celli] == cellCellStencil::HOLE)
{
label startLabel = ownerStartAddr[celli];
label endLabel = ownerStartAddr[celli + 1];

for (label facei = startLabel; facei < endLabel; facei++)
{
upper[facei] = 0.0;
}

for (label facei = startLabel; facei < endLabel; facei++)
{
lower[facei] = 0.0;
}

diag[celli] = normalisation[celli];
source[celli] = normalisation[celli]*m.psi()[celli];
}
}

forAll(cellIDs, i)
{
label celli = cellIDs[i];

const scalar f = factor[celli];
const scalarList& w = wghts[celli];
const labelList& nbrs = stencil[celli];
const labelList& nbrFaces = stencilFaces_[celli];
const labelList& nbrPatches = stencilPatches_[celli];

if (types[celli] == cellCellStencil::HOLE)
{
FatalErrorInFunction << "Found HOLE cell " << celli
<< " at:" << C()[celli]
<< " . Should this be in interpolationCells()????"
<< abort(FatalError);
}
else
{
// Create interpolation stencil

diag[celli] *= (1.0-f);
source[celli] *= (1.0-f);

forAll(nbrs, nbri)
{
label patchi = nbrPatches[nbri];
label facei = nbrFaces[nbri];

if (patchi == -1)
{
label nbrCelli = nbrs[nbri];

const scalar s = normalisation[celli]*f*w[nbri];

scalar& u = upper[facei];
scalar& l = lower[facei];
if (celli < nbrCelli)
{
diag[celli] += s;
u += -s;
}
else
{
diag[celli] += s;
l += -s;
}
}
else
{

const scalar s = normalisation[celli]*f*w[nbri];
m.boundaryCoeffs()[patchi][facei] += pTraits<Type>::one*s;
m.internalCoeffs()[patchi][facei] += pTraits<Type>::one*s;

// Note: do NOT add to diagonal - this is in the
//       internalCoeffs and gets added to the diagonal
//       inside fvMatrix::solve
}
}

}
}
}
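Schematically, the manipulation performed on the matrix row of one acceptor cell can be reproduced in isolation. The sketch below (illustrative, not OpenFOAM code) scales the original finite-volume coefficients by (1 - f) and adds the interpolation coefficients; the parameter norm is an assumption standing in for normalisation[celli], and the off-diagonal entries are assumed to correspond one-to-one to the donor weights.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hedged sketch of the acceptor-row manipulation: for blending
// factor f = 1 the original coefficients vanish and the row becomes
// the interpolation constraint phi_a - sum_i w_i phi_d_i = 0,
// scaled by norm for conditioning.
struct MatrixRow
{
    double diag;
    std::vector<double> offDiag;  // one entry per donor/neighbour
    double source;
};

void makeInterpolationRow(
    MatrixRow& row,
    const std::vector<double>& weights,  // must match offDiag size
    double f,
    double norm)
{
    row.diag *= (1.0 - f);    // fade out original diagonal
    row.source *= (1.0 - f);  // fade out original source
    for (std::size_t i = 0; i < row.offDiag.size(); ++i)
    {
        row.offDiag[i] *= (1.0 - f);       // fade out original coupling
        const double s = norm*f*weights[i];
        row.diag += s;                     // coefficient on acceptor
        row.offDiag[i] -= s;               // coefficient on donor
    }
}
```

With f = 1 and norm = 1 the row reduces exactly to phi_a - sum_i w_i phi_d_i = 0; intermediate f blends the finite-volume equation with the interpolation constraint.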


In order to understand what is done, let's go step by step through the function. In the first part of the function the information about the cells involved in the overset interpolation and the weights used for the interpolation is retrieved:


const cellCellStencilObject& overlap = Stencil::New(*this);
const List<scalarList>& wghts = overlap.cellInterpolationWeights();
const labelListList& stencil = overlap.cellStencil();
const labelList& cellIDs = overlap.interpolationCells();
const scalarList& factor = overlap.cellInterpolationWeight();
const labelUList& types = overlap.cellTypes();

overlap.cellInterpolationWeights();


returns a list of scalar lists where, for each interpolated cell index, the weights used for the interpolation are stored.

overlap.cellStencil();


returns a list of label lists where, for each interpolated cell index, the indices of the donor cells are stored.

overlap.interpolationCells()


returns the indices of the interpolated cells.

overlap.cellInterpolationWeight();


returns 1 for an interpolated cell and 0 for a calculated cell.

overlap.cellTypes();


returns the type of cells (interpolated, hole or calculated).

As a next point, let's briefly recap how OpenFOAM stores the matrix and the indices of the cells which are coupled in the matrix: lowerAddr stores the smaller index of the two cells delimited by a face and upperAddr stores the higher index delimited by the same face. The entries of the two arrays are ordered by face number. The coefficients of the matrix are stored in the arrays lower, upper and diag. diag contains the diagonal components of the matrix, lower() contains the coefficients M[upperAddr[facei]][lowerAddr[facei]] and upper() contains the coefficients M[lowerAddr[facei]][upperAddr[facei]]. The entries of the source term (the right-hand-side vector) are stored in source.
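With this storage, a matrix-vector product visits each cell once for the diagonal and each internal face twice, once per off-diagonal coefficient. A minimal sketch (plain C++, not the OpenFOAM implementation):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hedged sketch of an LDU matrix-vector product y = M*x using the
// layout described above: diag per cell; per internal face an upper
// coefficient M[l][u] and a lower coefficient M[u][l], where
// l = lowerAddr[facei] and u = upperAddr[facei].
std::vector<double> lduMatVec(
    const std::vector<double>& diag,
    const std::vector<double>& lower,
    const std::vector<double>& upper,
    const std::vector<int>& lowerAddr,
    const std::vector<int>& upperAddr,
    const std::vector<double>& x)
{
    std::vector<double> y(diag.size(), 0.0);

    // Diagonal contribution, one per cell
    for (std::size_t celli = 0; celli < diag.size(); ++celli)
    {
        y[celli] = diag[celli]*x[celli];
    }

    // Off-diagonal contributions, two per internal face
    for (std::size_t facei = 0; facei < lowerAddr.size(); ++facei)
    {
        const int l = lowerAddr[facei];
        const int u = upperAddr[facei];
        y[l] += upper[facei]*x[u];  // M[l][u] acts on the neighbour
        y[u] += lower[facei]*x[l];  // M[u][l] acts on the owner
    }
    return y;
}
```

The overset extension simply appends extra "faces" (donor-acceptor pairs) to lowerAddr/upperAddr and extra coefficients to lower/upper, and the same loop structure covers them.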

The actual accounting for the overset interpolation at matrix level happens in two steps. The first step is to clear out the existing connections between each interpolated cell and its neighbours. This has to be done since the matrix is first built in the usual way (without overset interpolation) using a standard finite volume discretization. This can be seen by looking at the source code of e.g. UEqn.H:


// Solve the Momentum equation

MRF.correctBoundaryVelocity(U);

tmp<fvVectorMatrix> tUEqn
(
    fvm::ddt(U) + fvm::div(phi, U)
  + MRF.DDt(U)
  + turbulence->divDevReff(U)
 ==
    fvOptions(U)
);
fvVectorMatrix& UEqn = tUEqn.ref();

UEqn.relax();

fvOptions.constrain(UEqn);

if (pimple.momentumPredictor())
{
    solve(UEqn == -fvc::grad(p));

    fvOptions.correct(U);
}


Here the matrix is first built using standard finite volume operators (like ddt(U) or div(phi, U)). At this stage no distinction is made between CALCULATED, INTERPOLATED and HOLE cells. The actual overset interpolation at matrix level is done in the function solve. Since the standard finite volume operators like ddt(U) or div(phi, U) do not know anything about the overset interpolation, the connections between the interpolated cells and the surrounding cells have to be deleted. The connections between the donor and acceptor cells are already accounted for in the extension of the ldu addressing. Clearing out the connections between an interpolated cell and its neighbours means that all off-diagonal coefficients of the matrix equation belonging to the interpolated cell are set to zero. That means if a cell with label n is an interpolated cell, all off-diagonal elements of the nth row of the original matrix have to be set to zero. The clearing out is done in the following code snippet:


forAll(upperAddr, facei)
{
    if (types[upperAddr[facei]] == cellCellStencil::INTERPOLATED)
    {
        // Disconnect upper from lower
        label celli = upperAddr[facei];
        lower[facei] *= 1.0-factor[celli];
    }
    if (types[lowerAddr[facei]] == cellCellStencil::INTERPOLATED)
    {
        // Disconnect lower from upper
        label celli = lowerAddr[facei];
        upper[facei] *= 1.0-factor[celli];
    }
}

for (label patchi = 0; patchi < nOldInterfaces; ++patchi)
{
    Field<Type>& intCoeffs = m.internalCoeffs()[patchi];
    Field<Type>& bouCoeffs = m.boundaryCoeffs()[patchi];
    forAll(fc, i)
    {
        label celli = fc[i];
        {
            if (types[celli] == cellCellStencil::INTERPOLATED)
            {
                scalar f = factor[celli];
                intCoeffs[i] *= 1.0-f;
                bouCoeffs[i] *= 1.0-f;
            }
            else if (types[celli] == cellCellStencil::HOLE)
            {
                intCoeffs[i] = pTraits<Type>::zero;
                bouCoeffs[i] = pTraits<Type>::zero;
            }
        }
    }
}



In the above code a loop over all faces is performed. First it is checked whether the neighbour cell of a face (the indices of the neighbour cells are stored in the array upperAddr[]) is of type INTERPOLATED. Remember that the array lower() stores the lower-triangle entries of the matrix A according to the ordering A[upperAddr[facei]][lowerAddr[facei]]. Since the entries of the matrix row contributing to the equation for the interpolated cell have to be set to zero, lower[facei] has to be set to zero. An analogous operation is performed for the owner cells. In this way the coupling between an interpolated cell and its neighbours coming from the finite volume discretization is erased.

The next step is to account for the overset interpolation at matrix level. Let's first recall the relation between the donor and acceptor cells resulting from the overset interpolation:

 - $\phi_a = \sum_i \omega_i \phi_{d,i}$ (1)

This equation establishes a linear relation between the interpolated cells and the donor cells. Note that the coupling is only one way: the interpolated cells are influenced by the donor cells but not vice versa. The solution in the donor cells is obtained by standard finite volume discretization. The following code snippet shows how the overset interpolation is achieved at matrix level:


forAll(cellIDs, i)
{
    label celli = cellIDs[i];

    const scalar f = factor[celli];
    const scalarList& w = wghts[celli];
    const labelList& nbrs = stencil[celli];
    const labelList& nbrFaces = stencilFaces_[celli];
    const labelList& nbrPatches = stencilPatches_[celli];

    if (types[celli] == cellCellStencil::HOLE)
    {
        FatalErrorInFunction << "Found HOLE cell " << celli
            << " at:" << C()[celli]
            << " . Should this be in interpolationCells()????"
            << abort(FatalError);
    }
    else
    {
        // Create interpolation stencil

        diag[celli] *= (1.0-f);
        source[celli] *= (1.0-f);

        forAll(nbrs, nbri)
        {
            label patchi = nbrPatches[nbri];
            label facei = nbrFaces[nbri];

            if (patchi == -1)
            {
                label nbrCelli = nbrs[nbri];

                const scalar s = normalisation[celli]*f*w[nbri];

                scalar& u = upper[facei];
                scalar& l = lower[facei];
                if (celli < nbrCelli)
                {
                    diag[celli] += s;
                    u += -s;
                }
                else
                {
                    diag[celli] += s;
                    l += -s;
                }
            }
            else
            {
                // Patch face. Store in boundaryCoeffs. Note sign change.
                const scalar s = normalisation[celli]*f*w[nbri];
                m.boundaryCoeffs()[patchi][facei] += pTraits<Type>::one*s;
                m.internalCoeffs()[patchi][facei] += pTraits<Type>::one*s;

                // Note: do NOT add to diagonal - this is in the
                //       internalCoeffs and gets added to the diagonal
                //       inside fvMatrix::solve
            }
        }
    }
}


The above code starts with a loop over all interpolated cells. The loop begins by retrieving the information required for the interpolation: f, which is 1 for an interpolated cell and 0 otherwise, the weights w required for the interpolation, and the indices nbrs of the donor cells of each interpolated cell. The list stencilFaces_[celli] contains the positions of the donor cells of the cell with index celli in the extended upperAddr and lowerAddr arrays, and stencilPatches_[celli] contains the id of the neighbour processor if a donor cell lies on a processor boundary.

The loop continues by setting the diagonal and the source to zero (both are multiplied by 1-f, which is zero for an interpolated cell). Note that the diagonal sums up to one at the later stage of the loop, since the sum of the weights is one. After that, the matrix coefficients resulting from the overset interpolation are written. The label facei determines the position in the lower and upper arrays where the coupling coefficients between donor and acceptor have to be stored. As a final step, the entry is written to the lower or the upper array depending on which of the donor and acceptor cell labels is higher.

# 4 Determination of the cell types

In this section we will see how the three different cell types are determined in OpenFOAM.

# 5 References

1. M. Breuer, G. De Nayer, M. Münsch, T. Gallinger, and R. Wüchner, “Fluid–structure interaction using a partitioned semi-implicit predictor–corrector coupling scheme for the application of large-eddy simulation,” Journal of Fluids and Structures, vol. 29, pp. 107–130, 2012.
2. W. J. Horne and K. Mahesh, “A massively-parallel, unstructured overset method to simulate moving bodies in turbulent flows,” Journal of Computational Physics, vol. 397, p. 108790, 2019.
3. J. H. Ferziger, M. Perić, and R. L. Street, Computational Methods for Fluid Dynamics. Springer, 2002, vol. 3.
4. M. Buchmayr, Development of Fully Implicit Block Coupled Solvers for Incompressible Transient Flows. TU-Graz, 2014.
5. H. Jasak, “Dynamic mesh handling in OpenFOAM,” in 47th AIAA Aerospace Sciences Meeting Including the New Horizons Forum and Aerospace Exposition, 2009, p. 341.