Grid & State
This page documents the core data structures: Grid and State.
Grid Type
The Grid struct contains spatial coordinates, spectral wavenumbers, and parallel decomposition information.
Definition
mutable struct Grid{T, AT}
    # Grid dimensions
    nx::Int          # Number of points in x (horizontal)
    ny::Int          # Number of points in y (horizontal)
    nz::Int          # Number of points in z (vertical)
    # Domain sizes
    Lx::T            # Domain size in x
    Ly::T            # Domain size in y
    # Grid spacings
    dx::T            # Grid spacing in x: dx = Lx/nx
    dy::T            # Grid spacing in y: dy = Ly/ny
    # Vertical grid (staggered, cell-centered)
    z::Vector{T}     # Vertical levels z[k], size nz
    dz::Vector{T}    # Layer thicknesses: dz[k] = z[k+1] - z[k], size nz-1
    # Spectral wavenumbers
    kx::Vector{T}    # x-wavenumbers, size nx
    ky::Vector{T}    # y-wavenumbers, size ny
    kh2::AT          # kx² + ky² on spectral grid
    # MPI decomposition (PencilArrays)
    decomp::Any      # PencilDecomp, or nothing for serial
end

Type Parameters
- T: Floating-point type (typically Float64)
- AT: Array type for kh2 (Array{T,2} for serial, PencilArray{T,3} for parallel)
Constructors
# Initialize from parameters (serial mode)
params = default_params(Lx=500e3, Ly=500e3, Lz=4000.0, nx=64, ny=64, nz=32)
grid = init_grid(params)
# Initialize with MPI decomposition
using MPI, PencilArrays, PencilFFTs
MPI.Init()
mpi_config = QGYBJplus.setup_mpi_environment()
grid = QGYBJplus.init_mpi_grid(params, mpi_config)

Grid Properties
# Dimensions
nx, ny, nz = grid.nx, grid.ny, grid.nz
# Domain size
Lx, Ly = grid.Lx, grid.Ly
# Grid spacings
dx, dy = grid.dx, grid.dy
# Vertical levels (staggered grid: z runs from -Lz+dz/2 to -dz/2 with dz = Lz/nz)
z = grid.z # Vector of length nz
dz = grid.dz # Vector of length nz-1
# Unstaggered (face) levels used for coefficients: z_face = z .- (grid.Lz / grid.nz) / 2
# Check if parallel
is_parallel = grid.decomp !== nothing

Wavenumber Access
# Wavenumber vectors (global, same on all processes)
kx = grid.kx # Vector of length nx
ky = grid.ky # Vector of length ny
# Horizontal wavenumber squared
# Serial: 2D array (nx, ny)
# Parallel: 3D PencilArray (local_nz, local_nx, local_ny)
kh2 = grid.kh2
# Convenience functions (handle serial/parallel automatically)
kx_val = get_kx(i_local, grid) # Get kx for local index
ky_val = get_ky(j_local, grid) # Get ky for local index
kh2_val = get_kh2(i, j, k, arr, grid) # Get kh² for local indices

Index Mapping (Parallel)
When using parallel decomposition, local indices must be mapped to global indices:
# Get local index ranges (xy-pencil / FFT input)
local_range = get_local_range(grid) # (k_range, i_range, j_range)
# Map local to global indices for a given array's pencil
i_global = local_to_global(i_local, 2, field) # x dimension
j_global = local_to_global(j_local, 3, field) # y dimension
k_global = local_to_global(k_local, 1, field) # z dimension
# Map local to global indices (z-pencil)
i_global = local_to_global_z(i_local, 2, grid)
j_global = local_to_global_z(j_local, 3, grid)
# Get local dimensions of any array
nz_local, nx_local, ny_local = get_local_dims(arr)
# Check if array is parallel
is_distributed = is_parallel_array(arr)

Decomposition Access
# Access decomposition (parallel mode only)
if grid.decomp !== nothing
    decomp = grid.decomp
    # Pencil configurations
    pencil_xy = decomp.pencil_xy   # For horizontal FFTs
    pencil_z  = decomp.pencil_z    # For vertical operations
    # Local ranges
    range_xy = decomp.local_range_xy
    range_z  = decomp.local_range_z
    # Global dimensions
    global_dims = decomp.global_dims   # (nz, nx, ny)
    # Process topology
    topology = decomp.topology   # (px, py)
end

State Type
The State struct contains all prognostic and diagnostic fields.
Definition
mutable struct State{T, RT<:AbstractArray{T,3}, CT<:AbstractArray{Complex{T},3}}
    # Prognostic fields (spectral space, complex)
    q::CT        # QG potential vorticity
    L⁺A::CT      # YBJ+ wave envelope (L⁺A where L⁺ = L - k_h²/4)
    # Diagnostic fields (spectral space, complex)
    psi::CT      # Streamfunction (from q via inversion)
    A::CT        # Wave amplitude (from L⁺A via YBJ+ inversion)
    C::CT        # Vertical derivative A_z
    # Velocity fields (real space, real)
    u::RT        # Zonal velocity: u = -dψ/dy
    v::RT        # Meridional velocity: v = dψ/dx
    w::RT        # Vertical velocity (from omega equation)
    # Wave velocity amplitude fields (real space)
    LA_real::RT  # Real part of wave velocity amplitude (YBJ L operator)
    LA_imag::RT  # Imaginary part of wave velocity amplitude
end

Type Parameters
- T: Floating-point type (Float64)
- RT: Real array type (Array{T,3} or PencilArray{T,3})
- CT: Complex array type (Array{Complex{T},3} or PencilArray{Complex{T},3})
Constructors
# Initialize from grid (serial mode)
state = init_state(grid)
# Initialize with MPI (parallel mode)
plans = QGYBJplus.plan_mpi_transforms(grid, mpi_config)
state = QGYBJplus.init_mpi_state(grid, plans, mpi_config)

Field Access
# Prognostic fields (time-stepped)
q = state.q # QG potential vorticity (spectral)
L⁺A = state.L⁺A # Wave envelope (spectral), uses Unicode identifier
# Diagnostic fields (computed)
psi = state.psi # Streamfunction (spectral)
A = state.A # Wave amplitude (spectral)
C = state.C # Vertical derivative dA/dz (spectral)
# Velocity fields (real space)
u = state.u # Zonal velocity
v = state.v # Meridional velocity
w = state.w # Vertical velocity
# Wave velocity amplitude (real space)
LA_real = state.LA_real # Real part of LA = L⁺A + (k_h²/4)A
LA_imag = state.LA_imag # Imaginary part of LA

Working with Arrays
# Get underlying data (works for both Array and PencilArray)
psi_data = parent(state.psi)
# Get local dimensions
nz_local, nx_local, ny_local = size(parent(state.psi))
# Access single element
val = state.psi[k, i, j]
# Access slice
top_level = state.psi[end, :, :] # Closest level to the surface
profile = state.psi[:, i, j]
# Set values
state.psi[k, i, j] = complex_value
state.psi[end, :, :] .= top_values
# Copy all of one field
state.psi .= initial_psi

Physical Interpretation
| Field | Symbol | Physical Meaning |
|---|---|---|
| q | q | QG potential vorticity: q = ∇²ψ + (f²/N²)∂²ψ/∂z² |
| L⁺A | L⁺A | YBJ+ wave envelope: L⁺ = L - k_h²/4 |
| psi | ψ | Streamfunction |
| A | A | Wave amplitude (recovered via elliptic inversion) |
| C | ∂A/∂z | Vertical derivative of wave amplitude |
| u | u | Zonal velocity: u = -∂ψ/∂y |
| v | v | Meridional velocity: v = ∂ψ/∂x |
| w | w | Vertical velocity (from omega equation or YBJ) |
| LA_real | Re(LA) | Real part of wave velocity amplitude LA = L⁺A + (k_h²/4)A |
| LA_imag | Im(LA) | Imaginary part of wave velocity amplitude |
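The velocity relations in the table become pointwise multiplications in spectral space: û = -i k_y ψ̂ and v̂ = i k_x ψ̂. A minimal serial sketch, using a 2D (nx, ny) slice for brevity (the package stores 3D (nz, nx, ny) fields; `spectral_velocities` is an illustrative helper, not a package function):

```julia
# Sketch: horizontal velocities from a streamfunction slice ψ̂ in spectral space.
# Derivatives become multiplications by i*k.
function spectral_velocities(psik::Matrix{ComplexF64},
                             kx::Vector{Float64}, ky::Vector{Float64})
    uk = similar(psik)
    vk = similar(psik)
    for j in axes(psik, 2), i in axes(psik, 1)
        uk[i, j] = -im * ky[j] * psik[i, j]   # û = -i k_y ψ̂
        vk[i, j] =  im * kx[i] * psik[i, j]   # v̂ =  i k_x ψ̂
    end
    return uk, vk
end

# Single spectral mode with ψ̂ = 1 at (kx, ky) = (2, 3):
psik = zeros(ComplexF64, 4, 4)
psik[2, 2] = 1.0
uk, vk = spectral_velocities(psik, [0.0, 2.0, -2.0, -1.0], [0.0, 3.0, -3.0, -1.0])
# uk[2, 2] == -3.0im and vk[2, 2] == 2.0im
```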
MPI Workspace
For 2D parallel decomposition, workspace arrays store z-pencil data:
Definition
struct MPIWorkspace{T, PA}
    q_z::PA      # q in z-pencil configuration
    psi_z::PA    # psi in z-pencil configuration
    L⁺A_z::PA    # L⁺A in z-pencil configuration
    A_z::PA      # A in z-pencil configuration
    C_z::PA      # C in z-pencil configuration
    work_z::PA   # General workspace
end

Constructor
# Initialize workspace (parallel mode only)
workspace = QGYBJplus.init_mpi_workspace(grid, mpi_config)

Usage
# Pass workspace to functions requiring vertical operations
invert_q_to_psi!(state, grid; a=a_vec, workspace=workspace)
invert_L⁺A_to_A!(state, grid, params, a_vec; workspace=workspace)
compute_vertical_velocity!(state, grid, plans, params; workspace=workspace)

Allocating Arrays
Serial Mode
# Allocate using grid
q = allocate_field(Float64, grid; complex=true) # Complex spectral
u = allocate_field(Float64, grid; complex=false) # Real physical

Parallel Mode
# Allocate in xy-pencil (for FFTs, horizontal operations)
arr_xy = QGYBJplus.allocate_xy_pencil(grid, ComplexF64)
# Allocate in z-pencil (for vertical operations)
arr_z = QGYBJplus.allocate_z_pencil(grid, ComplexF64)
# Allocate FFT backward destination (handles spectral→physical pencil difference)
phys_arr = QGYBJplus.allocate_fft_backward_dst(spectral_arr, plans)

FFT Backward Destination Allocation
In 2D MPI decomposition, spectral arrays (FFT output pencil) and physical arrays (FFT input pencil) may have different local dimensions. Use allocate_fft_backward_dst to correctly allocate the destination for fft_backward!:
# Allocate physical-space destination for backward FFT
phys = allocate_fft_backward_dst(spectral_arr, plans)
# Now safe to transform
fft_backward!(phys, spectral_arr, plans)
# Loop over physical array with correct dimensions
nz_phys, nx_phys, ny_phys = size(parent(phys))
for k in 1:nz_phys, j in 1:ny_phys, i in 1:nx_phys
    # Access phys[k, i, j]
end

Utility Functions
Grid Utilities
# Compute wavenumbers (after changing Lx, Ly)
compute_wavenumbers!(grid)
# Get dealiasing mask
mask = dealias_mask(grid) # 2D Bool array (nx, ny)

State Utilities
# Zero all fields
fill!(state.q, 0)
fill!(state.L⁺A, 0)
fill!(state.psi, 0)
# Check for NaN
has_nan = any(isnan, parent(state.psi))

Serial vs Parallel Comparison
| Operation | Serial | Parallel |
|---|---|---|
| Grid initialization | init_grid(params) | init_mpi_grid(params, mpi_config) |
| State initialization | init_state(grid) | init_mpi_state(grid, plans, mpi_config) |
| Array type | Array{T,3} | PencilArray{T,3} |
| Index access | Direct arr[k,i,j] | Via parent(arr)[k,i,j] |
| Wavenumber lookup | Direct grid.kx[i] | grid.kx[local_to_global(i,2,grid)] |
| grid.decomp | nothing | PencilDecomp struct |
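One reason the two columns can share most code: for a plain Array, `parent` is the identity, so loops written against `parent(arr)` work unchanged in serial mode. A quick pure-Julia check of the serial case (no MPI needed):

```julia
# parent() on a plain Array returns the array itself, so serial code can
# use the same parent(...)-based access pattern as the PencilArray path.
arr = zeros(ComplexF64, 4, 8, 8)   # (nz, nx, ny) ordering
local_data = parent(arr)           # identity for a plain Array
nz_local, nx_local, ny_local = size(local_data)
# local_data === arr and (nz_local, nx_local, ny_local) == (4, 8, 8)
```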
API Reference
Grid Initialization
QGYBJplus.init_grid — Function
init_grid(par::QGParams) -> Grid

Initialize the spatial grid and spectral wavenumbers from parameters.
Grid Setup
- Horizontal: Uniform grid with spacing dx = Lx/nx, dy = Ly/ny
- Vertical: Uniform staggered grid from -Lz+dz/2 to -dz/2 with spacing dz = Lz/nz
- Domain size (Lx, Ly, Lz) is REQUIRED - specify in meters (e.g., 500e3 for 500 km)
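The staggered vertical grid described above can be reproduced in a few lines; `staggered_levels` is an illustrative helper, not a package function:

```julia
# Cell-centered staggered levels: z runs from -Lz + dz/2 up to -dz/2,
# with uniform spacing dz = Lz/nz and nz-1 interior layer thicknesses.
function staggered_levels(Lz, nz)
    dz = Lz / nz
    z = [-Lz + (k - 0.5) * dz for k in 1:nz]   # cell centers
    return z, fill(dz, nz - 1)                  # levels, layer thicknesses
end

z, dz = staggered_levels(4000.0, 32)
# z[1] == -3937.5 (i.e. -Lz + dz/2) and z[end] == -62.5 (i.e. -dz/2)
```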
Wavenumber Arrays
Computes kx, ky following FFTW conventions for periodic domain:
kx[i] = 2π/Lx × (i-1)      for i = 1, ..., (nx+1)÷2
kx[i] = 2π/Lx × (i-1-nx)   for i = (nx+1)÷2 + 1, ..., nx
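This ordering can be checked with a small stand-alone sketch (`fftw_wavenumbers` is illustrative, not the package's init_grid):

```julia
# FFTW ordering: non-negative wavenumbers first, then negative ones,
# each scaled by 2π/Lx for a periodic domain of length Lx.
function fftw_wavenumbers(nx, Lx)
    return [2π / Lx * (i <= (nx + 1) ÷ 2 ? i - 1 : i - 1 - nx) for i in 1:nx]
end

kx = fftw_wavenumbers(8, 2π)   # Lx = 2π gives integer wavenumbers
# kx == [0, 1, 2, 3, -4, -3, -2, -1]
```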
Arguments
- par::QGParams: Parameter struct with nx, ny, nz, Lx, Ly, Lz
Returns
Initialized Grid struct with all arrays allocated.
Example
# Domain size is REQUIRED - specify in meters
par = default_params(nx=64, ny=64, nz=32, Lx=500e3, Ly=500e3, Lz=4000.0) # 500km × 500km × 4km
G = init_grid(par)

Fortran Correspondence
This matches init_arrays in init.f90.
QGYBJplus.compute_wavenumbers! — Function
compute_wavenumbers!(G::Grid)

Recompute wavenumber arrays kx, ky, kh2 if grid parameters changed.
This is useful after modifying grid dimensions or domain size.
Example
G.Lx = 4π # Change domain size
compute_wavenumbers!(G) # Update wavenumbers

State Initialization
QGYBJplus.init_state — Function
init_state(G::Grid; T=Float64) -> State

Allocate and initialize a State with all fields set to zero.
Arguments
- G::Grid: Grid struct (determines array sizes)
- T::Type: Floating-point type (default Float64)
Returns
State struct with:
- Spectral fields (q, L⁺A, psi, A, C): Complex arrays, initialized to 0
- Real fields (u, v, w, LA_real, LA_imag): Real arrays, initialized to 0
Example
G = init_grid(par)
S = init_state(G)
# All fields are zero - use init_random_psi! or similar to set ICs
init_random_psi!(S, G, par, plans)

See also: State, init_random_psi!
Field allocation is handled internally by init_state. For manual array creation, use standard Julia array allocation or allocate_xy_pencil/allocate_z_pencil for parallel mode.
Index Mapping Functions
QGYBJplus.get_local_range — Function
get_local_range(G::Grid) -> NTuple{3, UnitRange{Int}}

Get the local index range for the current process (xy-pencil configuration).
For 2D decomposition, this returns the xy-pencil ranges where z is local and x,y are distributed. Use get_local_range_z for z-pencil configuration.
Returns
- Serial mode: (1:nz, 1:nx, 1:ny)
- Parallel mode: the local range from the xy-pencil decomposition
Example
local_range = get_local_range(grid)
for k in local_range[1], i in local_range[2], j in local_range[3]
# Access data at local indices
endQGYBJplus.local_to_global — Function
local_to_global(local_idx::Int, dim::Int, G::Grid) -> Int
local_to_global(local_idx::Int, dim::Int, arr::AbstractArray) -> Int

Convert a local array index to a global index.
For MPI PencilArrays, use local_to_global(local_idx, dim, arr) so the mapping follows the array's pencil decomposition (input/output pencils can differ). For serial arrays, this returns local_idx.
Dimensions are ordered (z, x, y) so dim=1 is z, dim=2 is x, dim=3 is y.
Arguments
- local_idx: Local index in the array
- dim: Dimension (1, 2, or 3 for z, x, y)
- G::Grid: Grid with optional decomposition (xy-pencil mapping)
- arr: Array used to infer the local→global mapping (preferred for MPI)
Returns
Global index for wavenumber lookup.
Example
for j_local in axes(ψk, 3), i_local in axes(ψk, 2)
    i_global = local_to_global(i_local, 2, ψk)
    j_global = local_to_global(j_local, 3, ψk)
    kx = grid.kx[i_global]
    ky = grid.ky[j_global]
end

QGYBJplus.get_local_dims — Function
get_local_dims(arr) -> Tuple{Int, Int, Int}

Get the local dimensions of an array (works for both Array and PencilArray).
QGYBJplus.get_kx — Function
get_kx(i_local::Int, G::Grid) -> Real

Get the x-wavenumber for a local index, handling both serial and parallel cases.
QGYBJplus.get_ky — Function
get_ky(j_local::Int, G::Grid) -> Real

Get the y-wavenumber for a local index, handling both serial and parallel cases.
QGYBJplus.get_kh2 — Function
get_kh2(i_local::Int, j_local::Int, k_local::Int, arr, G::Grid) -> Real

Get the horizontal wavenumber squared for local indices.
For serial mode, accesses G.kh2 directly. For parallel mode, accesses the local PencilArray element.
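In serial mode the kh2 table is just the outer sum of squared wavenumbers. A minimal sketch with toy wavenumber vectors (not the package's init_grid output):

```julia
# Serial kh² table: kh2[i, j] = kx[i]^2 + ky[j]^2 on the (nx, ny) spectral grid.
kx = [0.0, 1.0, -2.0, -1.0]
ky = [0.0, 1.0, -2.0, -1.0]
kh2 = [kxi^2 + kyj^2 for kxi in kx, kyj in ky]   # size (nx, ny)
# kh2[3, 2] == (-2)^2 + 1^2 == 5.0
```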
FFT Array Allocation
QGYBJplus.allocate_fft_backward_dst — Function
allocate_fft_backward_dst(spectral_arr, plans)

Allocate a destination array for fft_backward! that is on the correct pencil.
For MPI plans with an input pencil, allocates on that input pencil (physical space). For serial plans, uses similar(), which works correctly.
This is the centralized helper function for all modules to use when allocating arrays as destinations for fft_backward!.
Arguments
- spectral_arr: A spectral-space array used as a template for the element type
- plans: FFT plans (MPIPlans or serial)
Returns
An array allocated on the correct pencil for fft_backward! destination.