A Julia implementation of the FEAST eigenvalue solver for computing the eigenvalues and eigenvectors of large-scale standard and generalized eigenvalue problems that lie within a specified search region.
FeastKit.jl is a pure Julia translation of the original FEAST library. FEAST is a numerical algorithm for solving both standard and generalized eigenvalue problems by computing eigenvalues located inside a given region in the complex plane.
Both Float64/ComplexF64 and Float32/ComplexF32 element types are supported.

using Pkg
Pkg.add("FeastKit")
using FeastKit
using LinearAlgebra
# Create a test matrix
n = 100
A = diagm(-1 => -ones(n-1), 0 => 2*ones(n), 1 => -ones(n-1))
# Find eigenvalues in the interval [0.5, 1.5]
result = feast(A, (0.5, 1.5), M0=10)
println("Found $(result.M) eigenvalues")
println("Eigenvalues: ", result.lambda)
# For generalized problem Ax = λBx
B = diagm(0 => ones(n))
result = feast(A, B, (0.5, 1.5), M0=10)
using SparseArrays
# Create sparse matrix
A_sparse = spdiagm(-1 => -ones(n-1), 0 => 2*ones(n), 1 => -ones(n-1))
result = feast(A_sparse, (0.5, 1.5), M0=10)
# For general complex matrices, use a circular search region
A_complex = randn(ComplexF64, 50, 50)
B_complex = Matrix{ComplexF64}(I, 50, 50)
# Search in circle centered at origin with radius 2
center = 0.0 + 0.0im
radius = 2.0
result = feast_general(A_complex, B_complex, center, radius, M0=15)
# Initialize Feast parameters
fpm = feastinit()
# Customize parameters
feast_set_defaults!(fpm.fpm,
                    print_level=1,
                    integration_points=16,  # More integration points
                    tolerance_exp=14,       # Higher precision
                    max_refinement=30)      # More refinement loops
# Use custom parameters
result = feast(A, (0.5, 1.5), M0=10, fpm=fpm.fpm)
# For banded matrices stored in LAPACK format
n = 100
k = 2 # Number of super-diagonals
A_banded = zeros(k+1, n)
# ... fill A_banded with appropriate values ...
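# As a sketch of the elided fill step (assuming the LAPACK upper symmetric
# banded convention, in which entry A[i, j] is stored at A_banded[k + 1 + i - j, j]),
# a symmetric pentadiagonal test matrix could be packed as follows; the
# 2.0, -1.0 and 0.5 values are illustrative only:
for j in 1:n
    for i in max(1, j - k):j
        if i == j
            A_banded[k + 1 + i - j, j] = 2.0    # main diagonal
        elseif j - i == 1
            A_banded[k + 1 + i - j, j] = -1.0   # first super-diagonal
        elseif j - i == 2
            A_banded[k + 1 + i - j, j] = 0.5    # second super-diagonal
        end
    end
end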
result = feast_banded(A_banded, k, (0.5, 1.5), M0=10)
MPI wrappers mirror the pdfeast_* routines in the original FEAST library.
They are only loaded when MPI.jl is available and FEASTKIT_ENABLE_MPI=true
is set in the environment:
export FEASTKIT_ENABLE_MPI=true
mpiexec -n 4 julia --project -e 'using FeastKit; println(FeastKit.mpi_available())'
Once enabled you can call the MPI interfaces directly:
if FeastKit.mpi_available()
    using MPI
    MPI.Init()
    A = Matrix{Float64}(I, 50, 50)
    B = Matrix{Float64}(I, 50, 50)
    fpm = zeros(Int, 64)
    feastinit!(fpm)
    FeastKit.mpi_feast_sygv!(A, B, 0.0, 2.0, 16, fpm)
    MPI.Finalize()
end
# Define matrix-vector operations
function A_mul!(y, x)
    # Implement y = A*x
    y .= A * x
end
function B_mul!(y, x)
    # Implement y = B*x
    y .= B * x
end
# Use matrix-free interface
result = feast_matvec(A_mul!, B_mul!, n, (0.5, 1.5), M0=10)
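The callbacks above capture the dense A and B defined earlier. The same interface also works when the operator is never assembled; as a sketch (reusing the feast_matvec form shown above), the tridiagonal test matrix can be applied directly as a stencil:

# Matrix-free application of the tridiagonal stencil; A is never formed
function laplacian_mul!(y, x)
    n = length(x)
    @inbounds for i in 1:n
        y[i] = 2x[i]
        i > 1 && (y[i] -= x[i-1])
        i < n && (y[i] -= x[i+1])
    end
    return y
end
# B = I for the standard problem
identity_mul!(y, x) = copyto!(y, x)
result = feast_matvec(laplacian_mul!, identity_mul!, n, (0.5, 1.5), M0=10)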
FeastKit.jl supports parallel computation: the linear system at each contour integration point can be solved independently, which leads to significant speedups for large problems.
# Enable parallel computation using threads
result = feast(A, (0.5, 1.5), M0=10, parallel=:threads)
# Or use the explicit parallel interface
result = feast_parallel(A, B, (0.5, 1.5), M0=10, use_threads=true)
using Distributed
# Add worker processes
addprocs(4)
# Use distributed computing for contour integration
result = feast(A, (0.5, 1.5), M0=10, parallel=:distributed)
FeastKit.jl provides full MPI support for high-performance computing clusters:
using MPI
using FeastKit
# Initialize MPI (if not already done)
MPI.Init()
# Basic MPI FeastKit
result = feast(A, B, (0.5, 1.5), M0=10, parallel=:mpi)
# Explicit MPI interface with communicator
comm = MPI.COMM_WORLD
result = mpi_feast(A, B, (0.5, 1.5), M0=10, comm=comm)
# Hybrid MPI + threading (best for modern HPC)
result = feast_hybrid(A, B, (0.5, 1.5), M0=10,
                      comm=comm, use_threads_per_rank=true)
Create a job script for SLURM/PBS:
#!/bin/bash
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=8
#SBATCH --cpus-per-task=4
#SBATCH --time=01:00:00
module load julia mpi
# Run with MPI + threading
mpirun -np 32 julia --threads=4 feast_mpi_example.jl
Julia script (feast_mpi_example.jl):
using MPI
using FeastKit
using LinearAlgebra
using SparseArrays
MPI.Init()
# Create distributed problem
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
size = MPI.Comm_size(comm)
if rank == 0
    println("Running FeastKit on $size MPI processes with $(Threads.nthreads()) threads each")
end
# Large eigenvalue problem
n = 10000
A = spdiagm(-1 => -ones(n-1), 0 => 2*ones(n), 1 => -ones(n-1))
B = sparse(I, n, n)
# Hybrid MPI + threading execution
result = feast_hybrid(A, B, (0.5, 1.5), M0=20,
                      comm=comm, use_threads_per_rank=true)
if rank == 0
    println("Found $(result.M) eigenvalues")
    println("Computation time measured on each rank")
end
MPI.Finalize()
For advanced users, a parallel RCI interface is available:
using FeastKit
using LinearAlgebra
# Problem setup (reusing the tridiagonal test matrix A from the quick start)
N = size(A, 1)
M0 = 10                        # Subspace size
Emin, Emax = 0.5, 1.5          # Search interval
B = Matrix{Float64}(I, N, N)   # Standard problem: B = I
fpm = zeros(Int, 64)
feastinit!(fpm)
# Create parallel state
state = ParallelFeastState{Float64}(ne=8, M0=M0, use_parallel=true, use_threads=true)
# Initialize workspace
work = randn(N, M0)
workc = zeros(ComplexF64, N, M0)
Aq = zeros(M0, M0)
Sq = zeros(M0, M0)
lambda = zeros(M0)
q = zeros(N, M0)
res = zeros(M0)
# RCI loop
while true
    pfeast_srci!(state, N, work, workc, Aq, Sq, fpm,
                 Emin, Emax, M0, lambda, q, res)
    if state.ijob == Feast_RCI_PARALLEL_SOLVE.value
        # Solve all contour points in parallel
        pfeast_compute_all_contour_points!(state, A, B, work, M0)
    elseif state.ijob == Feast_RCI_MULT_A.value
        # Compute A*q for residual calculation
        M = state.mode
        work[:, 1:M] .= A * q[:, 1:M]
    elseif state.ijob == Feast_RCI_DONE.value
        break
    end
end
# FeastKit automatically selects the best available backend
result = feast(A, B, (0.5, 1.5), M0=10, parallel=:auto)
# Manual backend selection
result = feast(A, B, (0.5, 1.5), M0=10, parallel=:mpi) # Force MPI
result = feast(A, B, (0.5, 1.5), M0=10, parallel=:threads) # Force threading
result = feast(A, B, (0.5, 1.5), M0=10, parallel=:serial) # Force serial
FeastKit uses contour integration in the complex plane to compute eigenvalues. The key steps are:

- Choose a closed contour that encloses the search interval or region
- Approximate the spectral projector by numerical quadrature of the resolvent (zB - A)^{-1} B along the contour (B = I for standard problems)
- Apply the projector to a block of M0 trial vectors to form a search subspace
- Solve the reduced (Rayleigh-Ritz) eigenvalue problem in that subspace
- Repeat the refinement loop until the residuals fall below the tolerance
The algorithm is particularly effective for:

- Computing interior eigenvalues inside a known interval or region
- Large sparse or structured matrices for which linear solves at the contour points are feasible
- Parallel execution, since the contour points are independent of one another
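To make the projection idea concrete, here is a minimal, self-contained sketch of one projection step. It is illustrative only and not FeastKit's internal implementation; it assumes a dense real symmetric A and a midpoint-rule quadrature on the upper half of a circle enclosing the interval.

using LinearAlgebra

# Apply the approximate spectral projector P ≈ (1/2πi) ∮ (zI - A)^{-1} dz to a block Y
function apply_projector(A, Emin, Emax, Y; ne = 8)
    c = (Emax + Emin) / 2                   # circle center
    r = (Emax - Emin) / 2                   # circle radius
    Q = zeros(ComplexF64, size(Y))
    for k in 1:ne
        θ = π * (k - 0.5) / ne              # quadrature nodes on the upper half circle
        z = c + r * cis(θ)                  # contour point
        Q .+= (r * cis(θ) / ne) .* ((z * I - A) \ Y)   # weighted resolvent solve
    end
    return real.(Q)                         # symmetry: the lower half contributes the conjugate
end

# One projection + Rayleigh-Ritz step on the tridiagonal test matrix
n, M0 = 100, 10
A = diagm(-1 => -ones(n - 1), 0 => 2 * ones(n), 1 => -ones(n - 1))
Q = apply_projector(A, 0.5, 1.5, randn(n, M0))
V = Matrix(qr(Q).Q)                         # orthonormal basis of the subspace
ritz = eigvals(Symmetric(V' * A * V))       # Ritz values approximate eigenvalues near [0.5, 1.5]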
FeastKit returns a FeastResult object containing:
struct FeastResult{T<:Real, VT}
    lambda::Vector{T}   # Computed eigenvalues
    q::Matrix{VT}       # Computed eigenvectors
    M::Int              # Number of eigenvalues found
    res::Vector{T}      # Residual norms
    info::Int           # Exit status (0 = success)
    epsout::T           # Final residual
    loop::Int           # Number of refinement loops
end
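As a minimal usage sketch (reusing the quick-start matrix A and the feast call shown earlier), the fields can be inspected and the residuals spot-checked like this:

result = feast(A, (0.5, 1.5), M0=10)
if result.info == 0
    for i in 1:result.M
        λ = result.lambda[i]
        x = result.q[:, i]
        println("λ[$i] = $λ, residual = $(norm(A * x - λ * x)), reported = $(result.res[i])")
    end
else
    @warn "Feast did not converge" info=result.info loops=result.loop
end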
The info field reports the exit status:

- info = 0: Successful convergence
- info = 1: Invalid matrix size
- info = 2: Invalid search subspace size
- info = 3: Invalid search interval
- info = 5: No convergence achieved
- info = 6: Memory allocation error
- info = 8: LAPACK error

Performance tips:

- Use parallel=:auto to automatically select the best backend
- Use parallel=:mpi or feast_hybrid() for optimal cluster performance

This is a Julia translation of the original FEAST library; some features from the original may not be fully implemented.
Contributions are welcome! Please feel free to submit issues, feature requests, or pull requests.
This project is licensed under the same terms as the original FEAST library.