The Plasma Science and Innovation Center


Codes

The PSI-Center is concentrating its efforts on two major extended MHD codes, NIMROD and HiFi, as well as other developmental/testing codes.

NIMROD

The Non-Ideal Magnetohydrodynamics with Rotation, Open Discussion code, NIMROD, is a macroscopic simulation code that solves the compressible nonlinear magneto-fluid equations with electric-field terms selected to represent either the non-ideal single-fluid MHD model or two-fluid models of magnetized plasmas. The NIMROD code is 3D, using spectral finite elements to represent a non-periodic plane and finite Fourier series for the periodic direction. The finite elements provide a flexible description of the poloidal plane, so NIMROD is suitable for a variety of innovative confinement concept (ICC) configurations. The code also has flexibility in the polynomial degree of the basis functions, so that spatial convergence can be achieved through the most efficient combination of mesh resolution and basis-function order. High-order basis functions have proven important for simulating the extremely anisotropic thermal transport associated with magnetized plasmas [1], particularly when the magnetic field is not aligned with the computational mesh, as is common in simulations of dynamic ICC experiments. High-order representation is also important for distinguishing parallel and perpendicular forces and for maintaining small numerical divergence error in the magnetic field.
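As a rough illustration of this spatial representation (the notation here is schematic, not NIMROD's internal one), a scalar field such as the pressure is expanded as

p(R,Z,\phi,t) \approx \sum_{j} \sum_{n=-N}^{N} p_{jn}(t)\, \alpha_j(R,Z)\, e^{in\phi},

where the \alpha_j(R,Z) are 2D finite-element basis functions of selectable polynomial degree on the poloidal mesh, the sum over n is the truncated Fourier series in the periodic direction, and the coefficients p_{jn}(t) are the quantities advanced in time.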

The most important kinetic effects for macroscopic dynamics result from particle mean free paths being long in plasmas of moderate to high temperature. The NIMROD Team (the USU group, in particular) has analytically and computationally developed integro-differential and continuum velocity-space methods for closing the fluid moment equations with parallel kinetic effects. The USU group has also developed a very general hierarchy of moment equations for incorporating kinetic effects, as discussed below. The kinetic computations include quantitative particle-collision operators and are valid over the full range from short mean free paths to collisionless behavior. This is important for modeling ICC experiments where there is strong interaction between confined high-temperature plasma and cooler edge plasma.
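To illustrate what such closures replace, in the short mean-free-path limit the parallel heat flux reduces to a local, Braginskii-type form, whereas at long mean free path it becomes a nonlocal integral along the magnetic field; schematically (these are generic illustrations, not the specific closure forms implemented in NIMROD),

q_\parallel \approx -\kappa_\parallel \nabla_\parallel T \quad \text{(collisional)}, \qquad q_\parallel(\ell) \approx -n\, v_t \int K(\ell-\ell')\, \nabla_\parallel T(\ell')\, d\ell' \quad \text{(long mean free path)},

where \ell is distance along a field line, v_t is the thermal speed, and the kernel K extends over many mean free paths, so the heat flux at a point depends on the temperature profile along the entire field line.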

NIMROD also models kinetic effects from energetic minority ions with a δf particle-in-cell (PIC) method. A δf pressure moment is calculated and added to the MHD momentum equation. This minimal modification to the MHD equations is valid in the limit n_HOT << n_BULK, but with the hot-ion pressure comparable to the bulk pressure (p_HOT ~ p_BULK).
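Schematically, the resulting hybrid momentum equation takes the form (notation illustrative)

\rho \frac{d\mathbf{V}}{dt} = \mathbf{J}\times\mathbf{B} - \nabla p - \nabla\cdot \mathbf{P}_{hot},

where \mathbf{P}_{hot} is the energetic-ion pressure tensor accumulated from the δf particle weights each time step, and the remaining terms are the usual bulk MHD contributions.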

HiFi

The PSI-Center is continuing to develop the 3D high-order finite (spectral) element code framework HiFi, based on its previously developed 2D version, also known as SEL. The distinguishing capabilities of the code include a fully 3D adaptive spectral-element spatial representation with flexible multi-block geometry (under development), a highly parallelizable implicit time advance, and a general flux-source form for the PDEs and boundary conditions that can be implemented within its framework. The 2D SEL code has been extensively validated and used for simulations of various multi-fluid plasma physics phenomena, including magnetic reconnection, cylindrical tokamak sawtooth oscillations, and FRC translation. HiFi uses the Hierarchical Data Format (HDF5) for parallel data I/O and a separate post-processing code for data analysis and generation of data files readable by the 3D visualization software VisIt (described later). HiFi can currently read single-block grids generated by the Sandia CUBIT meshing program.

Among recent additions to the physics modeling capabilities of HiFi are plasma-neutral interactions with a static neutral background, the Spitzer-Chodura resistivity model, the full Braginskii magnetic-field-dependent anisotropic heat conduction tensor, and the 3D Hall MHD model. Present development efforts include self-consistent boundary condition formulations for open and insulated-conductor boundaries, extension to the multi-block formulation, and implementation of physics-based preconditioning for better parallel scalability. Additionally, HiFi has been and continues to be used for studying fundamental numerical properties of the high-order finite element spatial discretization method; in particular, these include solving systems of PDEs with an extreme degree of anisotropy and assessing the numerical accuracy of computations on highly distorted, logically hexahedral structured grids.
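The flux-source form referred to above means that every equation in the physics model is cast as (schematically)

\frac{\partial u_i}{\partial t} + \nabla\cdot\mathbf{F}_i(u,\nabla u) = S_i(u,\nabla u),

so that a new physics model is specified by supplying the fluxes \mathbf{F}_i and sources S_i (and the corresponding boundary conditions) in a problem-specific module, while the spatial and temporal discretization machinery is reused unchanged.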

Scalable Parallel Solvers

Parallel solution of a linear system is called scalable if simultaneously doubling the number of dependent variables and the number of processors results in little or no increase in the computation time to solution. Multiple length scales characteristic of extended MHD lead to the need for high-order spatial representation and adaptive grids. Multiple time scales lead to the need for implicit time steps and the resulting requirement of efficient parallel solution of large, sparse linear systems. Scalability is essential for the efficient use of current and future generations of parallel supercomputers with 10⁴–10⁵ processors and petaflop speeds. Most extended MHD codes are not scalable, and as a result are limited to a maximum of a few thousand processors that can be used efficiently.
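One common way to quantify this notion of scalability (given here as a general definition, not a PSI-Center benchmark) is the weak-scaling efficiency

E(P) = \frac{T(N_0, P_0)}{T\!\left(\tfrac{P}{P_0} N_0,\, P\right)},

the ratio of the time to solution of a baseline run with N_0 unknowns on P_0 processors to that of a run in which the number of unknowns and the number of processors are both increased by the same factor; E(P) \approx 1 corresponds to scalability in the sense defined above.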

We have developed a new method to achieve scalability for extended MHD modeling of fusion plasmas. The heart of the approach is physics-based preconditioning, pioneered by Luis Chacón. The physical dependent variables are partitioned into two sets. The first set, e.g. density, pressure, and fields, is eliminated algebraically in terms of the second set, e.g. momentum densities, reducing the order and condition number and increasing the diagonal dominance of the matrices to be solved. The resulting Schur complement matrix is approximated and simplified by interchanging the order of substitution and spatial discretization. This procedure leads to an effective preconditioner, which accelerates the convergence of the outer nonlinear Newton-Krylov solver on the full system of equations. Approximations to the Schur complement affect only the rate of convergence, not the accuracy of the final solution. Solution of the reduced matrices uses additive Schwarz, blockwise-LU-preconditioned FGMRES, with many variations available through the PETSc library. The code is organized into a general-purpose solver and a separate, problem-dependent application module, which can adapt the method to any system of flux-source equations.
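Schematically, if the linearized system for the two variable sets x_1 (density, pressure, fields) and x_2 (momentum densities) is written in block form (notation illustrative),

\begin{pmatrix} A & B \\ C & D \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} b_1 \\ b_2 \end{pmatrix},

then eliminating x_1 = A^{-1}(b_1 - B x_2) leaves the Schur-complement system (D - C A^{-1} B)\, x_2 = b_2 - C A^{-1} b_1. It is an approximation to S = D - C A^{-1} B, simplified by interchanging the elimination with the spatial discretization as described above, that serves as the preconditioner; because it enters only as a preconditioner, the approximation influences how fast the Newton-Krylov iteration converges, not what it converges to.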

Weak parallel scaling tests on the NERSC Franklin Cray XT-4 computer, using up to 2,048 cores, have shown partial scalability of wall time and memory usage for visco-resistive MHD, based on the well-known GEM Challenge magnetic reconnection problem.

