Loop-TNR_CUDA.jl

Implementation of the Loop-TNR algorithm in Julia with CUDA.jl
Author: hsp9308
1. Requirements

- Julia 1.5.3 (later versions may not work because of changes in TensorOperations)
- Julia packages: CUDA, LinearAlgebra, SpecialFunctions, TensorOperations (an installation sketch follows this list)
- GPU architecture: Pascal or later (Volta, Turing, ...)
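
The packages above can be installed from the Julia REPL. A minimal sketch, assuming only the package names listed in this README (exact version pins are not specified here):

    # Install the registered packages; Julia 1.5.3 itself has to be installed separately.
    using Pkg
    Pkg.add(["CUDA", "SpecialFunctions", "TensorOperations"])
    # LinearAlgebra is part of the Julia standard library and needs no installation.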

2. How to run

- In main.jl, the parameters (the field terms hr and hi, the temperature terms br and bi, ...) are used to
calculate the logarithm of the partition function per site via the function partition(pars); a minimal call is sketched after this list.

- In loop_tnr.jl, the maximum number of iterations, the cut-off in the singular-value spectrum, and other settings can be modified.

- Run 'julia main.jl' to execute the code.
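
As a rough illustration of the partition call mentioned above, the sketch below assumes that pars simply collects the parameters hr, hi, br, bi; the actual container expected by partition is defined in main.jl and may be organized differently:

    # Hypothetical sketch; run after loading the repository files that define partition.
    pars = (hr = 0.0, hi = 0.1, br = 0.44, bi = 0.0)   # assumed parameter layout
    lnZ = partition(pars)                              # log of the partition function per site
    println("ln Z per site = ", lnZ)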

3. What you need to do for other models:

(1) Define a local tensor for the model of your choice:
  In this code, the local tensor is built by pre-defined functions (e.g., initial_tensor_XY, initial_tensor_ising),
  so you may need to write your own customized function; a sketch follows below.
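  As an illustration, here is a minimal sketch of such a function for the 2D classical Ising model on the square lattice; the name initial_tensor_custom and its interface are hypothetical and need not match initial_tensor_ising in this repository:

    using LinearAlgebra

    # Hypothetical local-tensor builder for the 2D classical Ising model.
    # The bond Boltzmann weight Q is split symmetrically as Q = W * W', and four
    # half-bonds are contracted at a site: A[i,j,k,l] = sum_s W[s,i] W[s,j] W[s,k] W[s,l].
    function initial_tensor_custom(beta)
        Q = [exp(beta) exp(-beta); exp(-beta) exp(beta)]
        F = eigen(Symmetric(Q))
        W = F.vectors * Diagonal(sqrt.(F.values)) * F.vectors'
        A = zeros(2, 2, 2, 2)
        for i in 1:2, j in 1:2, k in 1:2, l in 1:2
            A[i, j, k, l] = sum(W[s, i] * W[s, j] * W[s, k] * W[s, l] for s in 1:2)
        end
        return A
    end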

(2) Modify main.jl:
  main.jl is written for a single purpose: locating the leading Lee-Yang zeros.
  If you want to extract the partition function for a given set of parameters, you can use the function partition (a scan example is sketched below).
  Alternatively, you can use the subfunctions in "loop_tnr.jl" to extract the tensors at each stage of Loop-TNR.
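
  As an example of the first option, a simple parameter scan built on partition might look like the sketch below; the layout of pars and the scanned range are assumptions for illustration, not the actual Lee-Yang search implemented in main.jl:

    # Hypothetical scan over the imaginary field hi at fixed temperature.
    # Near a Lee-Yang zero the partition function becomes small, which shows up
    # as a dip in the real part of ln Z per site.
    for hi in range(0.0, 0.2, length = 21)
        pars = (hr = 0.0, hi = hi, br = 0.44, bi = 0.0)   # assumed parameter layout
        println(hi, "\t", partition(pars))
    end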