Discrepancies between serial and parallel builds of CUPyDO
Created by: acrovato
Context
I build CUPyDO without MPI on my computer to quickly run my tests before committing. I also build CUPyDO in parallel on a test machine (gaston) in order to 1) ensure compatibility with the parallel build of SU2, and 2) test this feature of CUPyDO.
Issue
I noticed somewhat large discrepancies in the results of a simple test case (an airfoil in a steady flow, attached to vertical and rotational springs) between a serial and a parallel build of CUPyDO. I verified that the difference is not machine-related by also building CUPyDO in serial on the test machine. Both outputs below were produced on the test machine, with the cases run in serial. The test case is Flow_RBM/staticAirfoil_fsi.py, solved with the IQN algorithm; the structural and fluid meshes are matching.
Serial build:
FSI residual: 5.81484980254e-07
FSI iterations: 12
[CTest] Lift coefficient = 0.773380 (expected 0.774016 +/- 5.0%)
rel diff = 8.211480e-04 <= 5.000000e-02 [ok]
[CTest] Vertical displacement = 0.154501 (expected 0.154490 +/- 0.010000)
abs diff = 1.130814e-05 <= 1.000000e-02 [ok]
[CTest] Rotational displacement = 3.441070 (expected 3.521739 +/- 0.500000)
abs diff = 8.066920e-02 <= 5.000000e-01 [ok]
Parallel build:
FSI residual: 9.07097320559e-07
FSI iterations: 16
[CTest] Lift coefficient = 0.708691 (expected 0.774016 +/- 5.0%)
rel diff = 8.439698e-02 > 5.000000e-02 [wrong!]
[CTest] Vertical displacement = 0.142204 (expected 0.154490 +/- 0.010000)
abs diff = 1.228569e-02 > 1.000000e-02 [wrong!]
[CTest] Rotational displacement = 2.899969 (expected 3.521739 +/- 0.500000)
abs diff = 6.217705e-01 > 5.000000e-01 [wrong!]
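For reference, the checks above apply a relative tolerance to the lift coefficient and absolute tolerances to the displacements. A minimal sketch of that arithmetic (the helper names are illustrative, not the actual CTest utilities):

```python
def check_relative(computed, expected, tol):
    """Relative check, used for the lift coefficient: |c - e| / |e| <= tol."""
    rel = abs(computed - expected) / abs(expected)
    return rel, rel <= tol

def check_absolute(computed, expected, tol):
    """Absolute check, used for the displacements: |c - e| <= tol."""
    diff = abs(computed - expected)
    return diff, diff <= tol

# Serial build lift coefficient: ~8.2e-04 <= 5e-02 -> ok
print(check_relative(0.773380, 0.774016, 0.05))
# Parallel build lift coefficient: ~8.4e-02 > 5e-02 -> wrong
print(check_relative(0.708691, 0.774016, 0.05))
```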
Questions
- Is there a difference in how the data are handled when CUPyDO is built with MPI? More specifically, are the interface data managed by PETSc even when CUPyDO is built with MPI but run without it? If so, would it not be better to always use PETSc, to avoid such discrepancies? (A sketch of the kind of build-time switch I have in mind is given below.)
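To illustrate the kind of switch the question refers to, here is a minimal, hypothetical sketch (not CUPyDO's actual interface-data code) of a wrapper that picks a NumPy backend in a no-MPI build and a PETSc backend in an MPI build, even when run on a single process:

```python
import numpy as np

try:
    from petsc4py import PETSc  # assumed to be available only in the MPI build
    HAVE_PETSC = True
except ImportError:
    HAVE_PETSC = False

class InterfaceVector(object):
    """Hypothetical nodal interface data, stored either as a NumPy array or a PETSc Vec."""

    def __init__(self, size):
        if HAVE_PETSC:
            self.vec = PETSc.Vec().createMPI(size)  # parallel layout, even on one process
            self.vec.set(0.0)
        else:
            self.vec = np.zeros(size)

    def add(self, indices, values):
        if HAVE_PETSC:
            self.vec.setValues(indices, values, addv=PETSc.InsertMode.ADD_VALUES)
            self.vec.assemblyBegin()
            self.vec.assemblyEnd()
        else:
            np.add.at(self.vec, indices, values)  # plain serial accumulation

    def to_numpy(self):
        return self.vec.getArray().copy() if HAVE_PETSC else self.vec.copy()
```

If the MPI build always takes the PETSc branch, the two builds exercise different code paths even for a one-process run, which is what I would like to confirm.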