[feature] Create Benchmarking scripts for GRASSIM
Summary
Create a new script to benchmark modifications to the GRASSIM model.
Current behavior
There is currently no script to benchmark GRASSIM.
Desired behavior
The objective is to create a benchmark script that generates and saves results with the current version of the model. Ideally, the commit ID should be included in the metadata of the saved file, so that results can be traced back to the version that generated them. The script would look like main.py or example.py, with a "save results" section at the end (the save_csv function can be used), and would live in a "Benchmarking" directory on the GitLab repository. A minimal sketch of this layout is given below.
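
The sketch below is only an illustration of the proposed structure, not an implementation: the script name, the `run_benchmark` placeholder, and the `save_csv` signature are assumptions, since only the existence of `save_csv` and the main.py/example.py style is stated in this issue. The commit ID is read from git so it can be stored alongside the results.

```python
# benchmark_grassim.py (hypothetical name) - minimal sketch of the proposed layout.
import subprocess
from datetime import datetime, timezone


def get_commit_id() -> str:
    """Return the short commit hash of the current checkout, or 'unknown' outside git."""
    try:
        return subprocess.check_output(
            ["git", "rev-parse", "--short", "HEAD"], text=True
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        return "unknown"


def run_benchmark():
    """Run the GRASSIM simulation and collect its outputs (placeholder)."""
    # results = grassim.run(...)  # hypothetical call, mirroring main.py / example.py
    results = []
    return results


if __name__ == "__main__":
    results = run_benchmark()

    # --- save results section ---
    metadata = {
        "commit_id": get_commit_id(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    # save_csv(results, "Benchmarking/results.csv", metadata=metadata)  # hypothetical signature
```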
Linked features or branches
Similar to branch #137 (closed), which created benchmarking scripts for light. Modifications of the model are done in branch #142.