diff --git a/latex/ReplyRef1.tex b/latex/ReplyRef1.tex
index 233df5defdafcf1d9eeda48fe0d59276835798d7..738be7fa3a22a1f03c7bbf6f9f1c28a5b809669f 100644
--- a/latex/ReplyRef1.tex
+++ b/latex/ReplyRef1.tex
@@ -3,163 +3,325 @@
 \usepackage{graphicx}
 \usepackage{lmodern}
 \usepackage{xcolor}
-\author{C. Troupin}
-\usepackage[margin=1in]{geometry}
+\usepackage{natbib}
+\PassOptionsToPackage{hyphens}{url}\usepackage{hyperref}
+\usepackage[hyphenbreaks]{breakurl}
+\usepackage{doi}
+\renewcommand{\familydefault}{\sfdefault}
 
+\setlength{\parindent}{0cm}
+\usepackage[margin=1in]{geometry}
+\usepackage[most]{tcolorbox}
+
+\newtcolorbox{SimpleBox}[2][]{%
+  breakable,
+  enhanced,
+  colback=red!0!white,
+  colframe=LightGray,
+  title=#2
+}
 
-\newcommand{\comment}[1]{\textcolor{black}{#1}}
-\newcommand{\reply}[1]{\textcolor{ForestGreen}{#1}}
-\newcommand{\newtext}[1]{\textcolor{ForestGreen}{\it #1}}
+\hypersetup{
+    bookmarks=true,         % show bookmarks bar?
+    unicode=false,          % non-Latin characters in Acrobat’s bookmarks
+    pdftoolbar=true,        % show Acrobat’s toolbar?
+    pdfmenubar=true,        % show Acrobat’s menu?
+    pdffitwindow=false,     % window fit to page when opened
+    pdfstartview={FitH},    % fits the width of the page to the window
+    pdftitle={Reply to Reviewer 1},    % title
+    pdfauthor={Charles Troupin},     % author
+    pdfsubject={The AlborEX dataset},   % subject of the document
+    pdfkeywords={AlborEX, Alboran Sea, submesoscale}, % list of keywords
+    colorlinks=true,       % false: boxed links; true: colored links
+    linkcolor=SlateGrey,          % color of internal links (change box color with linkbordercolor)
+    citecolor=SlateGrey,        % color of links to bibliography
+    filecolor=SlateGrey,      % color of file links
+    urlcolor=SlateGrey           % color of external links
+}
 
+\newenvironment{reply}{\color{DarkGreen}}{\ignorespacesafterend}
+%\newenvironment{newtext}{\color{ForestGreen}\rule{1cm}{2pt}\it}{\ignorespacesafterend}
+\renewcommand{\refname}{Additional references}
+
+\newtcolorbox{newtext}{
+        colframe=SlateGray,
+        colback =white,
+        top=0mm, bottom=0mm, left=0mm, right=0mm,
+        arc=0mm,
+        fontupper=\color{DarkGreen},
+        fonttitle=\color{white},
+        title=Added text:
+                        }
+
+\parskip .2cm
+
+\title{Interactive comment on "The AlborEX dataset: sampling of submesoscale features in the Alboran Sea"}
+\author{Anonymous Referee \#1}
+\date{}
 \begin{document}
 
+\maketitle
+
 \noindent
 
+GENERAL COMMENTS
+Please find below my review of the manuscript entitled "The AlborEX dataset: sampling of submesoscale features in the Alboran Sea" by Troupin et al. I think the data and the paper are relatively well presented. I especially enjoyed that all the files are netCDF format. While the data are limited to a very local application (a 6-day experiment from one sub-region of the Mediterranean Sea), the data are in high-quality and may be useful for process-related studies. Overall, the manuscript may be suitable for publication after moderate reviews. This decision is detailed below.
+
+MAJOR COMMENTS
+My major concerns on the actual version of the paper are the following:
+1. I think the text is not well organized. Some info on the data is find in Section 2 (AlborEX mission) and in Section 3.3 (Data Processing). This spreading of information makes the search for information through the paper difficult. I would bring Section 3.3. earlier in the paper and avoid to spread the information for each platform in different sections. Some specific comments below are related to this problem (e.g. mention of flags even before introducing them).
 
-We wish to thank the Reviewer for their constructive comments that really underline the aspects of the paper that needed to be further developed.
+Section 3.3 has been moved earlier in the text, into Section 2, so that the reader is aware of the processing and quality control applied to the data. The information is now provided in two subsections:
+2.4 Processing levels, which has been extended and made clearer following other comments;
+2.5 Quality control, where the general procedure is made explicit.
 
-This article (categorized as "review") by Troupin et al. is addressing a multidisciplinary data set collected in the western Mediterranean Sea during the AlborEX campaign. During the campaign in-situ observing devices (ships, floats, gliders, drifters\ldots) have been used (described here) but also satellite data. In the manuscript some aspects of the data set are described. As it stands now I do not recommend publication in ESSD. For the review I followed the ESSD evaluation criteria and also considered the general scope of the journal (as described on the website).
+2. The QC control is a weakness in this manuscript as it suggests that some QC is done, but it is not very clear on which data and how it is done. For some instruments, QC flags and their meaning are embedded in the files (e.g. float and drifters), but some doesn’t (glider files). This inconsistency is not so much a problem to me as long as it is clearly stated in the paper which files contains QC flags. These quality flags should however be defined in the text. There are several mentions of "quality flags" in the text and figure caption, but little explanation is provided on these. Figure 12 has 9 quality flags that are not even described (although I see their meaning in drifters and float files). Where the QC is easy to reference (e.g. "file generated with Socib glider toolbox vX.X", or "File QC done using Socib standard procedure following a procedure described in a certain paper", etc.), it should be mention in the netCDF file as well.
 
-First - Is this a "review" article? ESSD defines review articles as:
-"\ldotsmay compare methods or relative merits of data sets, the fitness of individual methods or data sets for specific purposes, or how combinations might be used as more complex methods or reference data collections."
-As I read it from the manuscript this is not the case. The current version of the manuscript reads more as a copy of data information from individual reports and the data section in scientific publications related to the experiment. As it stands, I do not see the criteria for a "review" type article fulfilled.
+To address these comments:
+A new table stating the meaning of the quality flag has been created (Table 2).
+A subsection “QC tests” has been inserted at the end of Section 2 to explain the general procedure for the quality control.
+In Section 3, for each platform type, a description of the specificities of the QC has been appended.
 
-\reply{We acknowledge the reviewer’s comment concerning the nature of the article. We made a mistake during the submission process. 
-Referring to the ESSD web page, we read that "Articles in the data section may pertain to the planning, instrumentation, and execution of experiments or collection of data.", and this is indeed the objective we had when submitting the manuscript. 
-However in the Submission page, the "Manuscript Type" did not offer the possibility to select it, hence we took another one which seemed the closest. We have contacted the editorial office concerning this and the manuscript type was changed on September 18, 2018.}
+Concerning the glider data: the toolbox referenced in the manuscript does not apply quality checks on the data in its current version. QC checks have been implemented but are still in the testing phase. Once they are validated, the files will be reprocessed and made available.
 
+More generally, a lot of effort has been made to ensure that the provided data are of the highest quality, even if that was not reflected in the submitted manuscript. All the SOCIB quality checks are explicitly described in the following document:
+QUID_DCF_SOCIB-QC-procedures.pdf
+SOCIB Quality Control Procedures
+Data Center Facility
+September 2018
+doi: https://doi.org/10.25704/q4zs-tspv
+and more tests are progressively being added to the current battery.
 
-Significance
-Three sub-criteria to evaluate:
-$\bullet$  Uniqueness: It should not be possible to replicate the experiment or observation on a routine basis. Thus, any data set on a variable supposed or suspected to reflect changes in the Earth system deserves to be considered unique. This is also the case for cost-intensive data sets which will not be replicated due to financial reasons. A new or improved method should not be trivial or obvious. 
-The data set is unique.
-(rating: 1 Excellent)
 
-\reply{Thank you for the appreciation}
+3. Why all processing level are not provided? The text suggests that all levels are provided (e.g. Table 3), but at the moment mostly L1 is provided. For gliders, L1 and L2 are provided. For the Float, L1 is provided for Arvor-A3 and Provor-Bio, but L0 for Arvor-C. Why? No explanation for this is provided (I think float data should be provided in L1 and L2 level as well). If some QC is applied on L1, maybe L0 should be provided as well to the future user? For glider L2 data, a choice is made regarding the vertical binning of the profiles. Which size these vertical bins are? This information should be provided somewhere.
 
-$\bullet$  Usefulness: It should be plausible that the data, alone or in combination with other data sets, can be used in future interpretations, for the comparison to model output or to verify other experiments or observations. Other possible uses mentioned by the authors will be considered.
+Following the definitions adopted at the SOCIB data center, Level 2 only exists for glider measurements: it means that we go from 3-dimensional trajectories to a time series of profiles (the observations are spatially gridded). The description of the processing levels has been edited and clarified in the new manuscript.
 
-The current manuscript does not provide information that promote the reuse of the data set (it may for subsets). No attempt is made to provide a structured overview about the workflow that is linked to the creation of the data set and, equally important, the QA/QC are not provided in a transparent way. For example, in the netcdf data files I see different QC flags provided – one is for example "SOCIB Quality control Data Protocol". What does that mean? This is not an international standard. A data set description, as envisioned in this ESSD submission, should exactly describe such non-standard QC procedures. Which QA and QC methods were applied
-(give brief description, DOIs if applicable)?
+Missing L1 for Arvor-C: this was an oversight; the file has been made available in the new version of the dataset.
 
-\reply{
-We agree with the reviewer and to address these issues:
-\begin{itemize}
-\item A new section dedicated to data reuse has been be added (see below) and 
-\item the section "\textit{3.3.2 Quality control}" has been expanded and made more explicit.
-\end{itemize}
-}
+For the glider data gridding (from L1 to L2): the referee is correct, this has to be explained in the manuscript. 
 
-\newtext{\subsection*{Data Reuse}
+The gridding is performed by the function gridGliderData (https://github.com/socib/glider_toolbox/blob/master/m/processing_tools/gridGliderData.m), designed to turn the glider trajectory data into instantaneous, homogeneous, regular profiles. By default, the vertical resolution (or step) is set to 1 meter in the present version of the processing, though it can be adapted by the user.
+For the spatial and temporal coordinates: they are computed as the mean values of the cast readings.
+For the variables: a binning is performed, taking the mean values of readings in depth intervals centered at selected depth levels.
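A minimal Python sketch of this binning step may help (gridGliderData itself is a MATLAB function; the arrays, names, and boundary handling below are illustrative only):

```python
import numpy as np

def bin_cast(depth, values, step=1.0):
    """Bin one cast onto regular depth levels.

    Each level gets the mean of all readings falling in a depth
    interval of width `step` centred on it; levels with no
    readings stay NaN.
    """
    levels = np.arange(0.0, np.nanmax(depth) + step, step)
    binned = np.full(levels.shape, np.nan)
    for i, z in enumerate(levels):
        in_bin = np.abs(depth - z) <= step / 2.0
        if np.any(in_bin):
            binned[i] = np.nanmean(values[in_bin])
    return levels, binned

# An irregular downcast binned at the default 1 m resolution
depth = np.array([0.2, 0.7, 1.4, 1.6, 2.5])
temp = np.array([20.0, 19.8, 19.0, 19.2, 18.5])
levels, t_binned = bin_cast(depth, temp)
```

With the 1 m default, the readings at 0.7 m and 1.4 m are averaged into the 1 m level, mirroring the "mean values of readings in depth intervals centered at selected depth levels" described above.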
 
-Three main types of data reuse are foreseen: 1.~model validation, 2.~data assimilation (DA) and 3.~planning of similar in situ experiments.
+These explanations are now included in the new manuscript, in the section dedicated to the processing levels.
 
-With the increase of spatial resolution in operational models, the validation at the smaller scales requires high-resolution observations. Remote-sensing measurements such as SST or chlorophyll-a concentration provides a valuable source of information but are limited to the surface layer. In the case of the present experiment, the position, intensity (gradients) and vertical structure of the front represent challenging features for numerical models, even when data assimilation is applied (Hernandez-Lasheras and Mourre, 2018).
+4. Nowhere the sensor configurations are specified. I think a table gathering this information is worth it. For each platform, the list of sensor should be presented with their configuration (sampling frequency, ADCP ping-per-ensemble, ADCP vertical bin size, etc.). This should include all variables collected, for example, from the ship meteo station from which little information (or none) is present in the text. Same for the glider where there is Chl-a and turbidity data in the files, but these were not mentioned in the text. A table gathering this information would be useful. 
 
-The AlborEx dataset can be used for DA experiments, for example assimilating the CTD measurements in the model and using the glider measurements as an independent observation dataset. The assimilation of glider observations has already been performed in different regions  (e.g. Melet et al., 2012; Mourre and Chiggiato, 2014; Pan et al., 2014) and has been shown to improve the forecast skills. However the assimilation of high-resolution data is not trivial: the the background error covariances tends to smooth the small scale features present in the observations.
+We agree with the suggestion and provided this information in the manuscript.
+Instead of a table, we feel it is better to have the information distributed in each subsection referring to the different platforms.
 
-Finally, other observing and modeling programs in the Mediterranean Sea can also benefit from the present dataset, for instance the Coherent Lagrangian Pathways from the Surface Ocean to Interior (CALYPSO) in the Southwest Mediterranean Sea (Johnston et al., 2018). Similarly to AlborEx, CALYPSO strives to study a strong ocean front front and the vertical exchanges taking place in the area of interest (see https://www.onr.navy.mil/Science-Technology/Departments/Code-32/All-Programs/Atmosphere-Research-322/Physical-Oceanography/CALYPSO-DRI for details).
-}
 
+5. A table regrouping all the platform with their basic configuration as well as their number of casts (when it applies) should be provided (sort of extended Table 3).
+
+For each platform, we indicated the basic configuration as well as the number of casts (for CTD, gliders and Argo floats).
 
 
-The multi-platform experiments performed in the Mediterranean Sea are of particular interest due to the variety of processes taking place in the area, so we added new references in the introduction, also referring to the approach combining in situ and remote-sensing (altimetry) data:
+TEXT-SPECIFIC COMMENTS
+- Figure 1 too small (should take page width)
+- Figure 2 too small (should take page width)
+→ Figures 1 and 2 have been enlarged in the new manuscript
 
-Similar studies comparing almost synchronous glider and SARAL/AltiKa altimetric data on selected tracks have also been carried between the Balearic Islands and the Algerian coasts (Aulicino et al., 2018; Cotroneo et al., 2016).
+- Figure 2 caption: there is mention of "flag data equal to 1" while these flag are not introduced in the text.
+→ SST is not part of the dataset; we just use the SST images to illustrate the situation during the mission. This is why we did not go into details concerning flag = 1, which is explicitly described in the caption (good data).
 
-I also miss any information how/if this data is disseminated via international data centres and how the data QC and dissemination is coordinate with the respective observing networks (Argo, DBCP, \ldots). Seadatanet is been mentioned in the text but it is unclear which specific recommendations are given.
+- p.7, L1: The "total number of valid measurement" is not very useful. I would rather put the number of valid casts (see comment above on a new table with this info).
+→ We agree. The number of valid measurements (for the gliders) has been removed and replaced by the number of casts, in the new manuscript.
 
-All the data presented in this paper are open data and can be accessed through the SOCIB Data Center in a few clicks, without any registration. Moreover, the data API (\url{http://api.socib.es}) strongly improves the data access to users and the dissemination to national or international data centers, which can easily establish a data transfer if they want to include SOCIB data into their portal.
+- p.7, L6: "a spatial interpolation is applied on the original data, leading to the so-called Level-2 data, further described in Sec. 3.3." What does ’spatial interpolation’ means? Section 3.3 is not very explicit on this. I know you mean that the glider yos have been separated into downward and upward casts and then assigned to a geographical coordinate, but maybe this should be stated explicitly (and I don’t think "spatial interpolation" is an accurate description). Moreover, Is there any vertical interpolation done? Because there are still some NaNs in L2 data.
+The referee is right: it is not exactly an interpolation that is performed, but a spatial gridding.
+The gridding is performed by the function gridGliderData, designed to turn the glider trajectory data into instantaneous, homogeneous, regular profiles. By default, the vertical resolution (or step) is set to 1 meter.
+For the spatial and temporal coordinates: they are computed as the mean values of the cast readings.
+For the variables: a binning is performed, taking the mean values of readings in depth intervals centered at selected depth levels.
+The NaNs are indeed not removed by the binning process, but they will be discarded or flagged once the files are reprocessed with the new version of the Glider Toolbox.
 
-As of today, many international databases exist and frequently, new ones are created with new projects, making the data landscape complex and the making it tedious to extensively document the data flow between SOCIB data and those databases.
-For instance:
-\begin{itemize}
-\item all the drifters data are transmitted to the Mediterranean Surface Velocity Programme (MedSVP, \url{http://doga.ogs.trieste.it/sire/medsvp/});
-\item Most of the data are transmitted to the Mediterranean Operational Network for the Global Ocean Observing System (MONGOOS, \url{http://www.mongoos.eu/data-center});
-\item MONGOOS sends the data to the In Situ Thematic Assembly Center (INSTAC) of the Copernicus Marine Environment Monitoring Service (CMEMS, \url{http://www.marineinsitu.eu});
-\item The PROVBIO float is available in OAO database (Villefranche-sur-mer, \url{http://www.oao.obs-vlfr.fr/maps/en/}
-\item The Argo floats and drifters data are transmitted to the CMEMS INSTAC.
-\item \ldots
-\end{itemize}
+This has been amended in the new manuscript, in the section that describes the different processing levels.
 
-...
 
-Our approach to guarantee that the data are available to the widest community consists of
-Having the data easily accessible in a standard format (netCDF) through standard protocols (HTTP, OPEnDAP, …), and without any registration. This means that any user or entity can download all the files and include them in their portal or database.
-Providing a data API to make easier the data discovery: the role of the API is really to allow users to make request such as 
-"give me all the observations measured by the platform X (glider, drifter)" or 
-"Give me all the observations in the region located in the area Y during a given time period."
+- p.7, L15: "Interestingly, all the drifters exhibit a trajectory close to the front position" -> Not clear what "trajectory close to the front means". Moreover, is that really surprising that surface drifter would aggregate on a front? 
+We removed the “Interestingly”, as indeed it is expected, and rephrased the sentence to:
+“All the drifters moved along the front position (deduced from the SST images), until they encountered the Algerian Current”.
 
-The explicit mention to SeaDataNet is made because of their Regional Data Products, which we believe are of crucial importance for the scientific community needing a complete set of historical, in situ data. The data transfer from SOCIB to SeaDataNet is foreseen in the future.
+- Figure 8 caption: "for the duration of the mission" -> You mean the ship mission? Or the AlborEX campaign?
+→ We meant for the AlborEX mission; this has been made explicit in the new manuscript.
 
+- Figure 10: plots on the right column are of little information here (too low resolution to mean something), I would remove.
+→ We agree that the resolution is not as good as the Arvor-C float, but for completeness we would prefer not to discard them.
 
-(rating: 4 poor)
+- Table 1: "Period" should be replaced by "cycle length" as referred to in the text (Section 2.2.4).
+→ Modified
 
-$\bullet$  Completeness: A data set or collection must not be split intentionally, for example, to increase the possible number of publications. It should contain all data that can be reviewed without unnecessary increase of workload and can be reused in another context by a reader.
+- Table 1: netCDF file for Provor-bio indicates deployment end date 2015-04-
+24T12:02:59+00:00, which is different from this table.
+→ The correct date is indeed 2015-04-24T12:02:59+00:00. The table has been modified accordingly. 
 
-It is difficult to evaluate this point. However, the nutrient data is not mentioned but is, according to Pascual et al. 2017 part of the AlborEX campaign. I would expect that these data set are described here as well (and respective QC (e.g. GO-SHIP nutrient manual??) and associated uncertainty estimates.
+- Figure 11 caption: "quality flag" not defined.
+→ The quality flag value of 1 (meaning “good data”) is specified in the caption. A complete description of the flags has been added in the text.
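As an illustration of how a user might apply such flags when reading the files, here is a minimal sketch (the variable values are hypothetical; the flag convention, where 1 means "good data" and 4 means "bad data", follows the one described in these replies):

```python
import numpy as np

GOOD = 1  # flag value meaning "good data" in this dataset

def keep_good(values, flags, good=GOOD):
    """Replace every reading whose quality flag is not 'good' with NaN."""
    values = np.asarray(values, dtype=float)
    return np.where(np.asarray(flags) == good, values, np.nan)

# Hypothetical readings with their quality flags (4 = bad data)
sst = [19.5, 19.6, 35.0, 19.4]
qc = [1, 1, 4, 1]
clean = keep_good(sst, qc)  # the 35.0 spike is masked
```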
 
-We agree with this suggestion and will add a specific section dedicated to the nutrient data. In relation to these data, we wish to add to the list of co-authors:
-Antonio Tovar-Sánchez, Instituto de Ciencias Marinas de Andalucía, (ICMAN – CSIC), Puerto Real, Spain
-Eva Alou, SOCIB
-Who were responsible for the acquisition and processing of these data during and after the cruise.
+- Section 3.3.1: A Section on processing levels, but they are not all provided. Why? I think all levels should be provided. This is related to a previous comment.
+→ The initial decision of not providing the L0 data for all the files had two reasons:
+For some platforms (gliders), the L0 files are rather large and contain many variables related to the platform engineering, not to oceanography.
+Even if the files were not provided through the Zenodo platform, they are still publicly available via the SOCIB THREDDS server.
+In the new version of the manuscript, we adopted a new way to distribute the data (the data catalog), in which the data files corresponding to all the processing levels are made available.
 
-We have now included the dissolved inorganic nutrients measured during Alborex (see new file alborex_nutrients.nc).
-This text was added to the new manuscript:
+- p.14, Level 2 (L2): "obtained by interpolating the L1 data" -> How L2 is obtained by "interpolating" L1? Isn’t L1 cut into casts that makes L2?
+→ Correct. It is not an interpolation but a gridding. The explanation of how this gridding is performed has been added to the manuscript.
 
-Samples for nutrient analysis were collected in triplicate from CTD Niskin bottles and immediately frozen for subsequent analysis at the laboratory. Concentrations of dissolved nutrients (Nitrite: NO2-, Nitrate: NO3- and Phosphate: PO43-) were determined with an autoanalyzer (Alliance Futura) using colorimetric techniques (Grasshoff et al,1983). The accuracy of the analysis was established using Coastal Seawater Reference Material for Nutrients (MOOS-1, NRCCNRC), resulting in recoveries of 97%, 95% and 100% for NO2-, NO3- and PO43-, respectively. Detection limits were NO2-:0.005 µM, NO3-: 0.1 µM and PO43-: 0.1 µM.
+Level 2 (L2): this level is only available for the gliders. It consists of regular, homogeneous and instantaneous profiles obtained by gridding the L1 data. In other words, 3-dimensional trajectories are transformed into a set of instantaneous, homogeneous, regular profiles.
+For the spatial and temporal coordinates: the new coordinates of the profiles are computed as the mean values of the cast readings. For the variables: a binning is performed, taking the mean values of readings in depth intervals centered at selected depth levels. By default, the vertical resolution (or bin size) is set to 1 meter. This level was created mostly for visualization purposes.
 
-Grasshoff, K., Ehrhardt, M., and Kremling, K. (Eds.) (1983). Methods of Seawater. Analysis, 2nd Edn.Weinheim: Verlag Chemie
+- p.14, Level 2 (L2): "It is only provided for gliders, mostly for visualization and post-processing purposes: specific tools designed to read and display profiler data can then be used the same way for gliders." -> Is there a problem with this sentence? I don’t understand it.
+→ We removed the part of the sentence starting with “post-processing purposes”
 
+- Section 3.3.1 / Table 3: Is L1 level for float equivalent to L2 level for glider? For consistency, I think profiling float should have L1 and L2 data as well since these instruments have similarities on the way they profile the water column…
+The three previous comments are related and can be addressed by better explaining the processing levels we defined.
+The L1 glider data consist of 3-dimensional trajectories, meaning that the longitude, latitude and depth all change with respect to time. Level 2 aims to represent the same data as vertical profiles: the longitude and latitude do not change for a given profile. This is illustrated in the figure below.
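This L1-to-L2 transformation can be sketched in a few lines (a toy example, not the actual toolbox code; splitting at the deepest sample is a simplification of how a "yo" is separated into a down-cast and an up-cast):

```python
import numpy as np

# A toy L1 trajectory: every sample has its own lon/lat/depth (one glider "yo")
time = np.arange(6.0)
lon = np.array([1.00, 1.01, 1.02, 1.03, 1.04, 1.05])
lat = np.array([38.00, 38.00, 38.01, 38.01, 38.02, 38.02])
depth = np.array([0.0, 50.0, 100.0, 60.0, 20.0, 0.0])

# Split at the deepest point into a down-cast and an up-cast, then
# collapse each cast to a single position (mean of the cast readings)
turn = int(np.argmax(depth))
casts = [slice(0, turn + 1), slice(turn, len(depth))]
profiles = [(lon[c].mean(), lat[c].mean(), time[c].mean()) for c in casts]
# Each L2 profile now carries one (lon, lat, time) instead of one per sample
```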
 
-(rating: 2 to 3)
 
-Data quality
 
-The data must be presented readily and accessible for inspection and analysis to make the reviewer’s task possible. Even if a data set submitted is the first ever published (on a parameter, in a region, etc.), its claimed accuracy, the instrumentation employed, and methods of processing should reflect the "state of the art" or "best practices". Considering all conditions and influences presented in the article, these claims and factors must be mutually consistent. The reviewer will then apply his or her expert knowledge and operational experience in the specific field to perform tests (e.g. statistical tests) and cast judgement on whether the claimed findings and its factors – individually and as a whole – are plausible and do not contain detectable faults. 
 
-I touched on that already under "Usefulness". In the manuscript no transparent QC assessment is presented. What were the methods of processing (provide key steps, DOI at least). What were, including quantification of uncertainties and qualification via flags, the results of the QA/QC procedures? Which were the major shortcomings of the data acquisition process and what could be done better in the future? For example, has the drifter data included in the European E–SurfMar data base and also in the DBCP global drifter data sets? Have the recommendations (Best Practices, Protocols) from E–SurfMar / DBCP considered? It looks like no commonly agreed standard has been used for some parameters – as "SOCIB Quality control Data Protocol" suggest? 
-(rating: 3)
 
-The QC procedure is described in the document QUID_DCF_SOCIB-QC-procedures.pdf
+- p.12, L1: "This type of current measurements requires a careful processing in order to get meaningful velocities from the raw signal" -> Why? What are the limitations that makes this instrument more sensitive compare to other ones?
+→ The main reason for this sensitivity is the fact that the vessel’s velocity is one or two orders of magnitude greater than the currents that have to be measured. It is thus critical to have good measurements of the vessel heading and velocity.
+
+A sentence has been inserted at the beginning of that paragraph and we removed the sentence “hence it is relevant to have a quality flag (QF) assigned to each measurement”.
+
+- p.12, L4: "Figure 12 shows the QF during the whole mission." -> How QF are calculated?
+→ The QC procedure for the VM-ADCP is complex as it involves tests on a large number of variables such as:
+Bottom Track Direction
+Bottom Track Velocity
+Bottom Track error on velocity
+Bottom Track Depth from beam
+Sea water noise amplitude
+…
+with dependencies between them but also variables related to the vessel position and behavior (pitch, roll, speed, ...).
+The tests adopted are listed in the reference QUID document:
+
+QUID_DCF_SOCIB-QC-procedures.pdf
 SOCIB Quality Control Procedures
 Data Center Facility
 September 2018
-Doi: https://doi.org/10.25704/q4zs-tspv
-The procedure in based on the commonly agreed standards. 
+doi: https://doi.org/10.25704/q4zs-tspv
 
-The article has been re-organised and for each type of platform, a description of the quality checks performed on the corresponding data has been added.
+and the new manuscript now contains a summary of the ADCP QC procedure.
 
-Which were the major shortcomings of the data acquisition process and what could be done better in the future?
+The vessel's velocity is one or two orders of magnitude greater than the currents that have to be measured, hence this type of current measurement requires careful processing in order to get meaningful velocities from the raw signal. The QC procedure for the VM-ADCP is complex, as it involves tests on more than 40 technical and geophysical variables (SOCIB Data Center, 2018). The different tests are based on the technical reports of Cowley et al. (2009) and Bender and DiMarco (2009), which primarily target ADCPs mounted on moorings. The procedure can be summarised as follows:
+Technical variables: valid ranges are checked for each of these variables; if the measurement is outside the range, the QF is set to 4 (bad data). Examples of technical variables are: bottom track depth, sea water noise amplitude, correlation magnitude.
+Vessel behaviour: its pitch, roll and orientation angles are checked and QFs are assigned based on specific ranges. In addition, the vessel velocity is checked and anomalously high values are also flagged as bad.
+Velocities: valid ranges are provided for the computed current velocities: up to 2 m/s, velocities are considered good; between 2 and 3 m/s, probably good; and above 3 m/s, bad.
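The velocity test of this procedure can be sketched as follows (flag values 1 and 4 are the "good" and "bad" values used in these replies; using 2 for "probably good" is our assumption):

```python
import numpy as np

def flag_velocity(speed):
    """Assign a quality flag to current speeds (m/s).

    Thresholds from the described procedure: up to 2 m/s good,
    between 2 and 3 m/s probably good, above 3 m/s bad.
    """
    speed = np.abs(np.asarray(speed, dtype=float))
    flags = np.full(speed.shape, 1, dtype=int)  # 1 = good
    flags[speed > 2.0] = 2                      # 2 = probably good (assumed value)
    flags[speed > 3.0] = 4                      # 4 = bad
    return flags

flags = flag_velocity([0.3, 2.5, 3.4])
# flags -> [1, 2, 4]
```

The real procedure then combines this with the technical-variable and vessel-behaviour checks listed above before assigning the final flag.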
 
-Possibly the glider sampling strategy could be improve by increasing the relative frequency of surfacing, in order to have more information on the variables near the surface.
 
 
-Presentation quality
-Long articles are not expected. Regarding the style, the aim is to develop stereotypical wording so that unambiguous meaning can be expressed and understood without much effort. The article should express clearly what has been found, where, when, and how. The article text and references should contain all information necessary to evaluate all claims about the data set or collection, whether the claims are explicitly written down in the article, or implicit, through the data being published or their metadata. The authors should point to suitable software or services for simple visualization and analysis, keeping in mind that neither the reviewer nor the casual "reader" will install or pay for it.
+- Figure 12: Too small.
+→ the figure has been enlarged in the new manuscript.
 
-mostly OK (given the limitation outlined in the previous points). It would be useful to include a brief introduction into the "design of the experiment. Visualisation tools are not given.
-(rating: 2-3)
+- Figure 12 and text below: 9 different quality flag are presented without any introduction on how they are calculated.
+→ The new paragraph in the same section (see previous comment) now explains how the quality flags are assigned.
 
-A section "Design of the experiment has been added" in Section 2, after the "General oceanographic context"
-References to existing visualisations tools have been provided in a new section "4.3 Data reading and visualisation". It is worth mentioning here that a set of python functions are provided to read, process and visualise the content of type of file.
+- Section 3.3.2 is very short. Should be re-worked following comments above.
+→ We agree that the section dedicated to the Quality Control was too short. The QC procedures are now described as follows:
+\begin{itemize}
+\item a general description in Section “2.5.2 QC tests”, and
+\item specific explanations of the tests performed for each platform, making that part more self-contained.
+\end{itemize}
+COMMENTS ON DATA FILES
+The dataset consists of a relatively large number of files. I did my best but it was nearly impossible to review them all in details. 
 
+We really appreciate the time you took to extensively check the files.
 
 
-Design of the experiment
-The deployment of in situ systems was based on the remote-sensing observations described in the previous Section. Two high­‐resolution grids were sampled with the R/V, covering an approximative region of 40 km x 40 km. At each station, one CTD cast and water samples for chlorophyll concentrations and nutrients analysis were collected. The thermosalinograph data were also used in order to assess the front position.
-One deep glider and one coastal glider were deployed in the same area with the idea to have butterfly-like track across the front. This turned out to be impossible considering the strong currents. 
-The 25 drifters were released close to the frontal area with the objective to detect convergence and divergence zones. Their release locations were separated by a few kilometers.
+Here are some comments:
+- There are very large spikes in deep glider turbidity
+→ Yes: as the provided glider datasets have not (yet) undergone the quality checks, there are still spikes and bad values for some of the variables.
+We will explain this better in the text.
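For readers who want to screen the turbidity data themselves before the quality-controlled release, a running-median spike test is a common approach (a minimal sketch with illustrative window and threshold values, not the glider toolbox algorithm):

```python
import numpy as np

def spike_flags(values, window=5, threshold=3.0):
    """Flag samples deviating from a running median by more than
    `threshold` times the median absolute deviation (MAD)."""
    values = np.asarray(values, dtype=float)
    half = window // 2
    padded = np.pad(values, half, mode="edge")
    med = np.array([np.median(padded[i:i + window])
                    for i in range(values.size)])
    resid = np.abs(values - med)
    mad = max(np.median(resid), 1e-9)  # floor avoids a zero MAD on flat data
    return resid > threshold * mad

turbidity = [0.4, 0.5, 0.4, 8.0, 0.5, 0.4, 0.5]
print(spike_flags(turbidity))  # only the 8.0 sample is flagged
```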
 
+- There are missing data for about 10h in deep glider data between May 25-26. Unless I missed it, no explanation for this are provided.
+→ The referee is right: some data are missing because the glider payload suffered an issue with the data logging software, resulting in no data acquisition for a few hours, during which the problem was being fixed. After that, data acquisition could be resumed.
 
-Specific comments
-P2/l.4: I do not agree with the statement: "a perfect observational system would consist in dense array of sensors present at many geographical locations, many depths and measuring almost continuously a wide range of parameters\ldots" – this "generalization" is trivial and useless. From an observing design point of view a "perfect" observing system must follow a design that will record only the observations that are needed to analyse the problem. As such the perfect observational system always depends on motivation for the experiment (or the problem in more general words) - in some cases a "perfect observing system" may comprise only one single sensor at one single depth at different locations if this has been found a sufficient approach for solving the problem (e.g. estimating global warming through a global tomography array). Please reformulate the statement along those lines.
+This explanation has been added to the corresponding section in the new manuscript.
 
-We agree that this formulation was not adequate and rephrased this part following this comment, as follows:
+“On May 25 at 19:24 (UTC), the deep glider payload suffered an issue with the data logging software, resulting in no data acquisition for a few hours, during which the problem was being fixed. After this event, data acquisition resumed on May 26 at 08:50 (UTC).”
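For completeness, such gaps are straightforward to locate from the time variable of the glider files; a minimal sketch with synthetic timestamps (the real files would supply the time array instead):

```python
import numpy as np

# Synthetic timestamps (seconds): regular 600 s sampling with one long gap,
# standing in for the glider time variable.
t = np.concatenate([np.arange(0, 6000, 600),
                    np.arange(54000, 60000, 600)])
dt = np.diff(t)
gaps = np.where(dt > 3600)[0]  # indices where the step exceeds one hour
for i in gaps:
    print(f"gap of {dt[i] / 3600:.1f} h between t={t[i]} s and t={t[i + 1]} s")
```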
 
-"To properly capture and understand these small-scale features, one cannot settle for only observations of temperature and salinity profiles acquired at different times and positions, but rather has to combine the information from diverse sensors, platforms acquiring data at different scales and at the same time, similarly to the approach described in Delaney and Barga (2009). This also follows the recommendation for the Marine Observatory in Crise et al. (2018) , especially the co-localization and synopticity of observations and the multi-platform, adaptive sampling strategy."
 
-\end{document}
+- Oxygen data for both glider seems to suffer from thermal lag problems
+→ Yes, it is true; we reached the same conclusion when plotting the oxygen data. It is planned to complement the glider toolbox with new functionalities to address that issue.
+We now mention this issue in the new manuscript:
+
+“Finally, oxygen data (not shown here) seem to exhibit a lag in the measurements. According to …, this issue is also related to the time response of oxygen optodes.”
+
+- Provor-bio datafile contains levels down to over 7000m. Some problems are found:
+1. Why such long level dimension? 
+The ∼7000 comes from the size of the depth dimension, as shown by the “ncdump -h” output:
+\begin{verbatim}
+dimensions:
+    time = UNLIMITED ; // (71 currently)
+    depth = 7118 ;
+    name_strlen = 49 ;
+\end{verbatim}
+However, this does not mean that the maximal depth is actually 7000 m or deeper, as it depends on the vertical resolution. Here the deepest measurements are on the order of 1000 m.
+The profiles from PROVBIO are shown in the next two figures.
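The distinction between the size of the depth dimension and the actual data coverage can be checked by counting non-fill values per profile; a minimal sketch with synthetic data (with the real file, netCDF4-python would return the fill values as masked elements directly):

```python
import numpy as np

# Synthetic stand-in for a (time, depth) variable: 3 profiles on a
# 7118-level vertical axis, with data only in the upper levels.
fill = 9.969210e+36                      # netCDF default float fill value
data = np.full((3, 7118), fill)
data[0, :300] = 15.0
data[1, :120] = 14.0
data[2, :950] = 13.5

valid = np.ma.masked_values(data, fill)  # mask the fill value
print(valid.count(axis=1))               # valid levels per profile → [300 120 950]
```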
+
+
+2. No good data is found below ∼325m, although Table 1 suggest that the float is profiling to 1000m
+
+The long level dimension comes from the initial netCDF file.
+
+We confirm that the float acquired data down to approx. 2000 m, even though the vertical resolution is not as high as near the surface. We reproduce (see below) Figure 10 from the manuscript, this time without limiting the depth range, in order to confirm the availability of data at that depth.
+
 
 
-\end{document}
\ No newline at end of file
+
+
+- Arvor A3 data file suffers from similar problem: file contains data only down to 115m while Table 1 says 2000m
+
+For the Arvor A3, we confirm that profiles are available down to approx. 2000 m. The “115” mentioned above is in fact the number of vertical levels provided in the file, not the final depth. See also the figure above for the data availability.
+
+- Arvor-C data file (only L0 provided) do not contain metadata (no file attributes, etc.). In addition, missing data (at least for temperature) appears to me as very large numbers (9.969210e+36) that makes them difficult to manipulate.
+→ The L0 file with the metadata and the L1 file have been prepared and are now available. The links to the THREDDS catalog are provided below:
+L0: http://thredds.socib.es/thredds/catalog/drifter/profiler_drifter/profiler_drifter_arvorc001-ime_arvorc001/L0/2014/catalog.html?dataset=drifter/profiler_drifter/profiler_drifter_arvorc001-ime_arvorc001/L0/2014/dep0001_profiler-drifter-arvorc001_ime-arvorc001_L0_2014-05-25.nc
+L1:
+http://thredds.socib.es/thredds/catalog/drifter/profiler_drifter/profiler_drifter_arvorc001-ime_arvorc001/L1/2014/catalog.html?dataset=drifter/profiler_drifter/profiler_drifter_arvorc001-ime_arvorc001/L1/2014/dep0001_profiler-drifter-arvorc001_ime-arvorc001_L1_2014-05-25.nc
+
+
+- R/V Socib CTD and thermosalinograph files say that units of temperature are "C". I prefer the convention from glider files which uses "Celsius".
+→ We take note of the suggestion and will perform the modification in a new release of the data files, as it involves re-processing several files from other missions. The referee is totally right: the Unidata documentation (https://www.unidata.ucar.edu/software/netcdf/netcdf/Units.html) states that “Celsius” should be used, “C” meaning “coulomb”.
+
+MINOR COMMENTS
+- p.2; L23: "makes it possible" -> makes possible 
+→ corrected
+
+- p.2; L23: "creation and publication of aggregated datasets covering the Mediterranean Sea" -> SeaDataNet is not only about the Mediterranean 
+→ replaced by “covering different European regional seas, including the Mediterranean Sea”
+
+- p.2; L32: "thanks due to" -> thanks to 
+→ corrected (removed “due”)
+
+- Section 2.2.1: "CTD surveys" or CTD legs? 
+→ corrected (legs)
+
+- Glider L1 files (e.g. dep0012_ideep00_ime-sldeep000_L1_2014-05-25_data_dt.nc) say that the project is "PERSEUS". Is that right? There is no mention of the AlborEX project in the file header. 
+→ Correct, AlborEx was Subtask 3.3.4 of the PERSEUS project, but in this case AlborEx was not explicitly mentioned in the file header. This will be added during the next re-processing of the data files.
+
+- p.10, L1: problems with latitude longitude degree symbol. 
+→ corrected
+
+- p.10, L5: temperature, salinity and T,S is use on the same line. Please homogenize. 
+→ replaced by “In addition to these variables”
+
+- p.12, L17: "Network Common Data Form 
+(netCDF, https://doi.org/(http://doi.org/10.5065/D6H70CW6, last accessed on August
+3, 2018)" Is there a mis-placed parenthesis?
+→ Corrected, the “(” after “.org” has been removed.
+
+- p.13, L2: problem with file name (too long for page) 
+→ Corrected (new line added).
+
+- p.16, L25: How stable in time the python codes made available on Github will be?
+→ Generally, reading netCDF files with Python is an easy task, as it is with other languages (MATLAB, Julia, R), so we do not expect any difficulties for the data users. What we did here is provide the set of Python codes written to show how to read the data and reproduce the plots of the paper, as we think it might save time if somebody wants to create something similar, or even reproduce the paper plots.
+With Python it is relatively straightforward to use virtual environments, which allow one to work with specific versions of Python modules. If a user works with a virtual environment that has the same package versions as those specified on GitHub (file requirements.txt), then the code will run (since the netCDF files will be the same).
+Even if issues occur, we think that providing the codes employed to manipulate the data files, along with the data, is a step toward the reproducibility of the results.
+
+
+\bibliographystyle{copernicus}
+\bibliography{AlborexData.bib}
+
+
+\end{document}
+