
Feature eigen

Merged Adrien Crovato requested to merge feature_eigen into master

Context

This MR replaces gmm by Eigen.
Eigen is newer, easier to use, has a larger community, and could possibly be faster than gmm. Furthermore, Eigen has an interface to Intel's Pardiso solver, which might be more efficient than MUMPS.
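
For reference, a minimal sketch of what trying that interface could look like, assuming Eigen is built against Intel MKL (solvePardiso is an illustrative name, not existing code):

    #include <Eigen/Sparse>
    #include <Eigen/PardisoSupport> // requires Intel MKL at build time

    // Illustrative only: solve A*x = b with Pardiso through Eigen
    Eigen::VectorXd solvePardiso(Eigen::SparseMatrix<double> const &A,
                                 Eigen::VectorXd const &b)
    {
        Eigen::PardisoLU<Eigen::SparseMatrix<double>> solver;
        solver.compute(A); // analysis + factorization
        return solver.solve(b);
    }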

Tasks

For now, the code has merely been translated; it has not yet been refactored or optimized.
To-do list:

  • replace gmm::matrix<double> by Eigen::Matrix<double>
  • replace std::vector<double> by Eigen::(Row)Vector<double>, where applicable and useful
  • refactor elementary matrices computation (mainly tbox::Element::build() methods), especially clean up the thermo-mechanical matrices
  • change method return types, i.e. void build(Matrix const &A, Matrix &K) should become Matrix build(Matrix const &A), where applicable (mainly tbox::Element); see the sketch after this list
  • replace tbox::Pt by Eigen::VectorXd
  • investigate Eigen's multithreading feature
  • investigate Pardiso interface
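
A sketch of the signature change meant by the return-type item above (names are illustrative):

    // before: the result is written into an output argument
    void build(Eigen::MatrixXd const &A, Eigen::MatrixXd &K);

    // after: the result is returned by value, which is cheap thanks to
    // move semantics and return-value optimization
    Eigen::MatrixXd build(Eigen::MatrixXd const &A);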

The unchecked tasks will be dealt with in the next MR.

Issues

Eigen uses the method of cofactors to invert fixed-size matrices up to 4x4. For now, the matrices in Mem are dynamically sized, but there should be a way to force Eigen to use the method of cofactors for MemTri3, MemQuad4 and MemTetr4. This could partly address #27.
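
A minimal sketch of the difference, assuming a 3x3 elementary matrix (names are illustrative):

    #include <Eigen/Dense>

    void sketch()
    {
        // Fixed-size: inverse() uses the closed-form cofactor expressions,
        // selected at compile time for sizes up to 4x4
        Eigen::Matrix3d Jf = Eigen::Matrix3d::Random();
        Eigen::Matrix3d Jfi = Jf.inverse();

        // Dynamic-size: even when the run-time size is 3x3, inverse() goes
        // through a general LU decomposition
        Eigen::MatrixXd Jd = Eigen::MatrixXd::Random(3, 3);
        Eigen::MatrixXd Jdi = Jd.inverse();
    }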

Notes

I replaced std::vector<double> by Eigen::VectorXd where those vectors were used for linear algebra. There are some exceptions; check the comments in my commits.
The main exception is that some vectors are public and hold variables and results. I decided to keep std::vector in such cases in order to have more flexibility when interfacing with python (SWIG already fully interfaces std::vector). For linear algebra on the C++ side, these std::vector are wrapped with Eigen::Map<Eigen::Vector>, which references the std::vector's data (keeping it up to date) and can be manipulated as an Eigen::Vector.
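
A minimal sketch of this pattern (names are illustrative):

    #include <vector>
    #include <Eigen/Dense>

    void sketch()
    {
        std::vector<double> u(5, 1.0); // public storage, wrapped by SWIG
        // The Map shares u's memory: no copy is made, and any write through
        // u_ is immediately visible in u
        Eigen::Map<Eigen::VectorXd> u_(u.data(), u.size());
        u_ *= 2.0; // linear algebra on the C++ side; u now holds 2 everywhere
        // Caveat: if u reallocates (e.g. via resize), the Map dangles and
        // must be rebuilt
    }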

Additional changes

  • Git LFS support
  • Force CMake to look for MKL before looking for other BLAS libraries
  • Minor changes in flow solver

Tests

Passed on ubuntu18.04 (python 2.7.15), debian4.9 (python 2.7.13), msys2 (python 3.8.1).
@R.Boman and @Kim.Liegeois: please double-check that the solvers you implemented still give accurate results, particularly with multithreading. The test battery passes, but I did not run and check each test individually (only some of them)!

Edited by Adrien Crovato


Activity

      int avC = 0; // adaptive viscosity counter
      bool rUpwd = true; // update best upwind element

    + // Setup linear (inner) solver MUMPS
    + std::map<std::string, int> mumpsOpt;
    + MumpsInterface mumps(mumpsOpt);
    +
    + // Residual and change in solution
    + std::vector<double> dPhi(phi.size()); // dummy dPhi
    + Eigen::Map<Eigen::VectorXd> rPhi_(rPhi.data(), rPhi.size()), dPhi_(dPhi.data(), dPhi.size());
  • Adrien Crovato @acrovato started a thread on commit e50be0d8
      * @brief Build LHS matrix and RHS vector for Picard iteration
      * @authors Adrien Crovato
      */
    - void Picard::build(gmm::csr_matrix<double> &A, std::vector<double> &b)
    + void Picard::build(Eigen::SparseMatrix<double, Eigen::RowMajor> &A, std::vector<double> &b)
  • Adrien Crovato @acrovato started a thread on commit e50be0d8
      self.solver = h.Solver(self.pbl)
      self.solver.nthreads = 1
    - self.solver.restol = 1e-11
    + self.solver.restol = 1e-10
  • Adrien Crovato @acrovato started a thread on commit e50be0d8
      # -- Eigen --
      FIND_PACKAGE(EIGEN REQUIRED)
      TARGET_INCLUDE_DIRECTORIES(tbox PUBLIC ${EIGEN_INCLUDE_DIRS})
    + TARGET_COMPILE_DEFINITIONS(tbox PUBLIC EIGEN_DONT_PARALLELIZE)
    + IF(MKL_INCLUDE_DIRS AND MKL_LIBRARIES)
    +     MESSAGE(STATUS "Linking Eigen with MKL")
    +     TARGET_COMPILE_DEFINITIONS(tbox PUBLIC EIGEN_USE_MKL_ALL)
    + ELSE()
    +     MESSAGE(STATUS "Linking Eigen with ${BLA_VENDOR}")
    +     TARGET_COMPILE_DEFINITIONS(tbox PUBLIC EIGEN_USE_BLAS)
    +     #TARGET_COMPILE_DEFINITIONS(tbox PUBLIC EIGEN_USE_LAPACKE) # why this line fails?
  • Adrien Crovato @acrovato started a thread on commit e50be0d8
      public:
      MumpsInterface(std::map<std::string, int> const &opts);

    - void setSoE(gmm::csr_matrix<double> const &A, std::vector<double> const &b);
    + void setSoE(Eigen::SparseMatrix<double> const &A, Eigen::Map<Eigen::VectorXd> const &b);
    • Author Maintainer

      Map instead of Vector, to be consistent with the Solver classes in the modules. Is there a template class that supersedes both Map and Vector?

    • If you want the method to accept both std::vector and Eigen::VectorXd types, couldn't you use Eigen::VectorXd const & as the argument?

      • The call with an Eigen::VectorXd would be setSoE(A, b)
      • The call with a std::vector b would be setSoE(A, Eigen::Map<Eigen::VectorXd>(b.data(), b.size())). In this way, the conversion is explicit at the call site and is made outside the implementation of the called class.

      If it is possible to do so, it is cleaner because we don't need to know what happens elsewhere in the code to understand the types of the arguments of this method. The method also becomes compatible with a classical Eigen::VectorXd object (which is the type we expect here).

      Edited by Boman Romain
    • Author Maintainer

      What I would have liked is b as an Eigen::Vector or as an Eigen::Map<Eigen::Vector>, because b can be:

      • the RHS of a Picard iteration soe, which we are not really interested in (thus it could be a temporary Vector)
      • the RHS of a Newton iteration soe, i.e. the residual, which we are interested in (thus it has to be a Map)

      However, I see no way of doing that...

    • The question will be solved if we find an easy way to wrap Eigen::Vectors in python.

    • Author Maintainer

      I'll look into it... in the future!
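
      For what it is worth, Eigen::Ref may be the template class asked about above: a Ref<const VectorXd> parameter binds to both a plain VectorXd and a Map without copying. A minimal sketch, assuming read-only access suffices (firstEntry is an illustrative name):

          #include <vector>
          #include <Eigen/Dense>

          // Binds to Eigen::VectorXd, Eigen::Map<Eigen::VectorXd> and
          // compatible expressions without copying
          double firstEntry(Eigen::Ref<const Eigen::VectorXd> b) { return b(0); }

          void sketch()
          {
              Eigen::VectorXd v = Eigen::VectorXd::Ones(3);
              std::vector<double> w(3, 2.0);
              firstEntry(v);                                                // plain vector
              firstEntry(Eigen::Map<Eigen::VectorXd>(w.data(), w.size())); // mapped std::vector
          }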

  • Adrien Crovato @acrovato started a thread on commit e50be0d8
      nt++;

      // equation #1
    - gmm::add(u1, gmm::scaled(v1, dt), u1); // u1 = u0 + dt*v0
    + u1_ += v1_ * dt; // u1 = u0 + dt*v0

      for (auto s : pbl->srcs)
          s->apply(t, u1);

      // equation #2
    - gmm::mult(K, u1, tmp1);
    - gmm::mult(S, v1, tmp2);
    - for (size_t i = 0; i < gmm::vect_size(Md); ++i)
    -     v1[i] = v1[i] - dt / Md[i] * (tmp1[i] + tmp2[i]);
    + v1_.array() -= dt * (K * u1_ + S * v1_).array() / Md_.array();
  • Adrien Crovato @acrovato started a thread on commit e50be0d8
      class FLOW_API FpeLSFunction : public tbox::LSFunction
      {
      private:
    - flow::Newton &solver;      ///< solver object (must be of Newton type)
    - std::vector<double> &u;    ///< solution
    - std::vector<double> u0;    ///< initial solution
    - std::vector<double> du;    ///< change in solution
    - std::vector<double> &fRes; ///< residual vector
    + flow::Newton &solver;           ///< solver object (must be of Newton type)
    + std::vector<double> &u;         ///< solution
    + std::vector<double> u0;         ///< initial solution
    + Eigen::Map<Eigen::VectorXd> du; ///< change in solution
    • Author Maintainer

      Map instead of Vector, to be consistent with the Solver classes in the modules. Is there a template class that supersedes both Map and Vector?

    • It looks like it is rather simple to view a std::vector as an Eigen::VectorXd using a Map. The opposite is impossible (a std::vector cannot "share" memory it does not own). Thus, I don't see any (clean) way to accept both std::vector and Eigen::VectorXd in this case.

      Edited by Boman Romain
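
      A minimal sketch of that asymmetry (names are illustrative):

          #include <vector>
          #include <Eigen/Dense>

          void sketch()
          {
              // std::vector -> Eigen: a Map views the vector's memory, no copy
              std::vector<double> s(3, 1.0);
              Eigen::Map<Eigen::VectorXd> s_(s.data(), s.size());

              // Eigen -> std::vector: only possible by copying, since a
              // std::vector always owns its storage
              Eigen::VectorXd e = Eigen::VectorXd::Ones(3);
              std::vector<double> ec(e.data(), e.data() + e.size());
          }
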
  • Author Maintainer

    I could not add reviewers (maybe because of the WIP status).

  • Boman Romain marked the checklist item "refactor elementary matrices computation (mainly tbox::Element::build() methods), especially clean up the thermo-mechanical matrices" as completed

  • Boman Romain marked the checklist item "refactor elementary matrices computation (mainly tbox::Element::build() methods), especially clean up the thermo-mechanical matrices" as incomplete

  • Adrien Crovato added 1 commit

  • Adrien Crovato @acrovato started a thread on commit 83e47124
      // first grab upper row values...
      std::vector<Eigen::Triplet<double>> tUp;
      tUp.reserve(pbl->msh->nodes.size() / 10);
    + tbb::spin_mutex::scoped_lock lock(mutex);
    • Author Maintainer

      I am not sure why I need to lock here... All idxUp and idxLw are different, so there should not be any thread conflicts. @R.Boman, any idea what I am missing?

    • I guess that the matrix is structurally modified by this code. How is adding a zero handled internally? I don't know, and even if it does not modify the internal structure of the matrix today, can you make that assumption for future versions of Eigen?

      Check the structure of the matrix before and after this loop. It should be the same.

      You could also add a debug loop before that, which would check whether all the indices that you access have been previously allocated (and are still allocated after the loop).

      Note: I would add assert(tUp.size() <= pbl->msh->nodes.size() / 10) after the loop to make the assumption explicit, because, as you know, this could also be a problem.

    • Author Maintainer

      > I guess that the matrix is structurally modified by this code.

      I came to the same conclusion, but only existing coefficients are set to 0, which should not modify the matrix structure. Only the coeffRef method might insert elements and modify the memory.

      > Note: I would add assert(tUp.size() <= pbl->msh->nodes.size() / 10) after the loop to make the assumption explicit, because, as you know, this could also be a problem.

      I can add that. However, I doubt that this is the issue, since the number of entries in tUp will be equal to the number of nodes connected to the node at idxUp, which will be much smaller than the problem size.
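
      A minimal sketch of the distinction discussed here (the matrix and indices are illustrative):

          #include <Eigen/Sparse>

          void sketch()
          {
              Eigen::SparseMatrix<double, Eigen::RowMajor> A(4, 4);
              A.insert(0, 0) = 1.0;   // (0,0) now exists in the structure
              A.coeffRef(0, 0) = 0.0; // overwrites in place: no structural change
              A.coeffRef(1, 2) = 5.0; // (1,2) is absent: coeffRef inserts it, which
                                      // can reallocate the internal arrays (the
                                      // scenario the lock guards against)
          }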

  • Adrien Crovato @acrovato started a thread on commit 3c14d4b0
      # check results
      print(ccolors.ANSI_BLUE + 'PyTesting...' + ccolors.ANSI_RESET)
      tests = CTests()
    + tests.add(CTest('solver: iteration count', solver.nIt, 5, 1, forceabs=True))
    + tests.add(CTest('solver_ref: iteration count', solver.nIt, 5, 1, forceabs=True))
  • Adrien Crovato changed the description

  • Adrien Crovato added 3 commits

    • 3c14d4b0 - Added iteration counter to tests, to increase robustness
    • 10a8050f - Corrected mistake
    • 0cf68e6c - Merge branch 'adrien' into feature_eigen

  • Adrien Crovato added 1 commit

    • ebaa8863 - Removed parallel loop and increased test tolerance for gmsh3.

  • Adrien Crovato changed the description
