Metrological assessment of multi‐sensor camera technology for spatially‐resolved ultra‐high‐speed imaging of transient high strain‐rate deformation processes

The present work proposes a metrological route for capturing spatially-resolved ultra-high-speed kinematic full-field data from high strain-rate experiments using multi-sensor camera technology. This paper focuses, from an application point of view, on highly resolved rotating mirror cameras, such as the Cordin-580. This camera records 78 frames of 8 megapixels at up to 4 million frames per second (fps). The optical apparatus induces distortions that need to be taken into consideration. Distortions are modelled with Zernike polynomials and recovered using Digital Image Correlation (DIC) with a tailored synthetic speckle pattern. Effective displacements can then be obtained quantitatively with subpixel precision. After an assessment of the calibrated camera performance, this methodology is used to record, at 480,000 fps, the fracture of a pre-notched sample subjected to an inertial impact test. The kinematic fields obtained quantitatively capture the events occurring during the test, such as the compression wave and the induced Poisson effect, the Mode-I crack initiation and the shear strain concentration at the notch tip. The achievement of DIC displacement and strain random errors of, respectively, 5 μm (0.15 pixels) and 2 mm m−1, combined with a high spatio-temporal sampling, provides a promising way of quantitatively analysing very fast transient and heterogeneous phenomena.

Transient dynamic phenomena require, to be captured and understood, both high temporal and high spatial resolutions. Recent developments in full-field measurement methods, combined with the development of time- and space-resolved ultra-high-speed cameras, have opened the way to the study of such dynamic phenomena during high strain-rate tests.
In that context, in 2007, Kajberg and Wikman [1] estimated the viscoplastic parameters of a mild steel during an impact test. This was performed by taking 15 frames of 166 × 192 pixels at 125,000 frames per second (fps) with a high-speed camera. Using a speckle pattern and Digital Image Correlation (DIC), the authors were able to retrieve the displacement and strain fields with random errors of 0.1 pixels and 15 mm m−1, respectively. The material parameters were then estimated using an inverse Finite-Element Model Updating technique (FEMU). In 2015, Gao et al. [2] studied brittle fracture mechanisms considering a notched semi-circular specimen made of concrete subjected to dynamic three-point bending. By taking 24 photographs of 1082 × 974 pixels at a speed of 180,000 fps and with the use of both DIC and elastic fracture mechanics theory, the authors were able to extract the crack-tip position, its speed and the dynamic fracture initiation toughnesses of the material for different loading rates. In 2018, Forquin et al. [3] studied the tensile response of concrete when exposed to high strain rates. Using the grid method, the authors extracted the displacement and the strain fields from 102 images of 312 × 260 pixels taken at 1 million fps. Furthermore, by experimentally computing acceleration fields and using the Virtual Fields Method (VFM), the mean stress and Young's modulus were identified. Results were consistent with the data obtained using numerical simulations and a PRM damage model. These works demonstrate the ability to measure mechanical quantities of interest during a high strain-rate experiment. However, they also highlight the systematic trade-off, when using a mono-sensor technology, between the number of frames used to sample the event, the temporal resolution and its spatial counterpart.
On the other hand, multi-sensor technologies provide two ways of recording at high and ultra-high frame rates while maintaining an image resolution higher than 1 megapixel. Gated-intensified CCD cameras split the beam into the number of captured frames (usually fewer than 16). The resulting low-intensity beams have to be amplified. This technology allows the user to record a few frames at ultra-high speed with an image resolution above 1 megapixel, and it has been successfully used for dynamic experiments. [4-7] However, these works highlight a high level of noise: about 3.5% of the dynamic range in Pierron et al. [5] According to the authors, this spatially correlated noise is attributed to a 'leakage' of photons from an amplifier to the surrounding pixels. In practice, it creates significant blurring of the images, compromising the quality of strain measurements. In 2019, Rubino et al. [8] delivered an interesting comparison of DIC fields obtained from gated-intensified and mono-sensor technologies, emphasising such a strong increase in measurement uncertainty. Moreover, as only a few frames can be recorded for practical considerations, a trade-off has to be made anyway between the acquisition speed and the duration of the recorded event. The second technology is, to the authors' knowledge, the only one that allows a significant number of frames to be recorded with an image resolution higher than 1 megapixel. It relies on a rotating mirror and multiple sensors. Haboussa et al. [9] studied in 2011 the effect of a hole or a pre-crack on the propagation of a dynamic crack. The authors recorded the dynamic crack propagation in a polymethylmethacrylate (PMMA) sample at 200,000 fps using a Cordin-550 rotating mirror camera. They were able to extract the crack-tip position, which was in good agreement with the results of X-FEM simulations.
Using the rotating mirror camera Cordin-550, Jajam and Tippur [10] studied in 2011 the dynamic crack propagation through a glass inclusion in an epoxy pre-notched sample. By recording the events at 300,000 fps, with 32 images of 1000 × 1000 pixels, Jajam and Tippur were able to extract the crack-tip position, its speed and the stress intensity factors (SIFs). They could observe the influence of the bond between the inclusion and the matrix as well as the influence of the inclusion's position on the crack growth and the SIFs. Similarly, Lee et al. [11] studied the dynamic fracture of unidirectional graphite/epoxy composites. A Cordin-550 was used to record the events between 100,000 and 250,000 fps. Using DIC, the authors were able to extract the SIF values for different samples.
While a series of works have used such technology to observe dynamic processes in detail and even extract some fracture mechanics parameters, only a couple have achieved quantitative measurements and performed a metrological analysis of such cameras, where important measurement bias can arise due to the multiplicity of the optical paths. [4,12,13] Moreover, in these few valuable works, displacement and strain noise floors were evaluated using a route that may be questionable. Indeed, the methodology, which will be referred to as the sensor-to-sensor approach, consists in using two sets of successive and independent image sequences of static samples and in evaluating kinematic errors by comparing the images taken by the same sensor. Hence, displacement fields are computed for each image in a different and distorted configuration. This contrasts with the classical Lagrangian approach in mechanics, where the whole kinematic history has to be expressed in a single undistorted reference configuration. Such a procedure is a way to bypass the issue of evaluating individual sensor distortions by assuming that they are sensor dependent but small and constant enough from one shot to another. In the present work, it will be demonstrated that these assumptions are not necessarily fulfilled. Furthermore, it will be shown that relying on them may lead to significant errors in the displacement and strain fields. In that context, the various authors only obtained lower bounds of the displacement and strain random errors, in the order of 0.1 pixels and 1 mm m−1, respectively. Kirugulige et al. [12] went one step further by identifying an affine distortion correction for each sensor, in order to mitigate the impact of the distortions induced by the optical apparatus.
While a proper metrological assessment has yet to be performed before this kind of camera can be used for quantitative DIC, the technology has now achieved an unprecedented combination of spatio-temporal sampling and recording length compared to the other available technologies. In that context, the present work proposes a new, potentially more robust, calibration procedure dedicated to such a multi-sensor ultra-high-speed camera. The dedicated calibration procedure for the ultra-high-speed Cordin-580 camera is presented first. Particular attention is given to the distortions induced by the complex optical apparatus and to the way they are modelled. The metrological issues raised by the camera are also discussed, and the performance obtained is analysed. In a second part, the proposed methodology is applied to a real test case: the Kalthoff-based inertial impact test is presented, and the captured kinematic measurements are then described and analysed.

| DIGITAL IMAGE CORRELATION
The methodology relies on DIC at two stages: for the estimation of the total displacement and for the calibration of the distortion model. In this section, the principle of DIC is presented, as well as some details about its implementation in the open-source software UFreckles. [14] DIC is based on the optical flow equation, which states the conservation of brightness between a reference image f and a deformed image g. This fundamental principle is written as

$f(\mathbf{X}) = g(\mathbf{X} + \mathbf{u}(\mathbf{X})),$

where $\mathbf{u}(\mathbf{X})$ is the sought displacement vector at the position $\mathbf{X}$ in the frame of the reference image. Note that this non-linear inverse problem is ill-posed, since two components are sought for the displacement but only one equation can be written for the grey-level conservation. Images are discrete by nature, since they are acquired by a sensor composed of a matrix of photosites where photons are collected. The grey level is thus known at the integer pixel positions $\mathbf{X}_p$. In the following, $\mathbf{F}$ is a vector with as many rows as pixels considered in the region of interest (ROI), which collects the values of f at the pixel locations $\mathbf{X}_p$. In the same spirit, $\mathbf{G}$ collects the values of the advected deformed image $g(\mathbf{X}_p + \mathbf{u}(\mathbf{X}_p))$. To reduce the number of unknowns in the problem, a finite-element description of the displacement field is adopted. A mesh conforming to the ROI is thus created. It might be a regular mesh of square elements [15] or, as in finite-element simulation, a mesh of arbitrarily shaped finite elements of different types. Only linear elements are considered in the following; they can be either quadrangles or triangles. A generic form for the displacement field is

$\mathbf{u}(\mathbf{X}) = \sum_n \mathbf{U}_n N_n(\mathbf{X}),$

where $\mathbf{U}_n$ is the nodal displacement at node n, which has two components. In the same manner as for the image, $\mathbf{N}$ will collect the values of the finite-element shape functions $N_n$ at each pixel of the reference image. $\mathbf{N}$ is thus a matrix with a number of rows equal to the number of pixels and a number of columns equal to the number of nodes.
If $\mathbf{U}$ is the vector collecting the nodal displacements, then the vector collecting the displacement at all of the pixels of the ROI is $\mathbf{N}\mathbf{U}$. The resolution of the optical flow equation, even in its discrete form, is a non-linear problem that is solved following an iterative process. Given an initial vector of nodal displacements $\mathbf{U}_i$, a solution increment $d\mathbf{U}$ is sought. After a linearisation of the deformed image advected by this new solution, the problem is written in matrix format for all of the pixels within the ROI:

$(\nabla_X \mathbf{G} .\!* \mathbf{N})\, d\mathbf{U}_X + (\nabla_Y \mathbf{G} .\!* \mathbf{N})\, d\mathbf{U}_Y = \mathbf{F} - \mathbf{G}.$

In this equation, $\nabla_X \mathbf{G}$ and $\nabla_Y \mathbf{G}$ are vectors collecting the values of the two components of the advected image gradient, $d\mathbf{U}_X$ and $d\mathbf{U}_Y$ are vectors collecting the components of the nodal displacement increment $d\mathbf{U}$, and $.\!*$ stands for the element-wise multiplication of the vector/matrix elements along the line index. After some manipulations, one obtains the following linear system of equations:

$\mathbf{M}\, d\mathbf{U} = \mathbf{b}.$

In this system of equations, the gradient of the advected image has been replaced by the gradient of the reference image. This generally affects the convergence speed, but it allows this gradient to be computed once. This overdetermined system is solved in a least squares sense by assembling the usual operator of the sum of squared differences (SSD) criterion in DIC:

$\mathbf{M} = \begin{bmatrix} (\nabla_X \mathbf{F} .\!* \mathbf{N})^T (\nabla_X \mathbf{F} .\!* \mathbf{N}) & (\nabla_X \mathbf{F} .\!* \mathbf{N})^T (\nabla_Y \mathbf{F} .\!* \mathbf{N}) \\ (\nabla_Y \mathbf{F} .\!* \mathbf{N})^T (\nabla_X \mathbf{F} .\!* \mathbf{N}) & (\nabla_Y \mathbf{F} .\!* \mathbf{N})^T (\nabla_Y \mathbf{F} .\!* \mathbf{N}) \end{bmatrix} \quad \text{and} \quad \mathbf{b} = \begin{bmatrix} (\nabla_X \mathbf{F} .\!* \mathbf{N})^T (\mathbf{F} - \mathbf{G}) \\ (\nabla_Y \mathbf{F} .\!* \mathbf{N})^T (\mathbf{F} - \mathbf{G}) \end{bmatrix}.$

In the following, this DIC formulation is used in order to obtain the total displacements (ensuing from the effective mechanical fields and the distortions from the camera). As will be seen later on, no camera image is free of distortions, so none can be considered as a reference. In this case, the reference image f is a synthetic image of a tailored pattern. This pattern is then used to engrave the surface of the sample (either the target for calibration or the experimental sample). The images acquired by the camera are considered as deformed images g.
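As an illustration, the Gauss-Newton resolution described above can be sketched for the simplest possible case: a rigid translation, for which the shape function matrix N reduces to a column of ones per displacement component. This is a minimal, self-contained toy example, not UFreckles code; the synthetic speckle, the ROI size and the cubic-spline interpolation (via `scipy.ndimage.map_coordinates`) are all assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(0)
f = gaussian_filter(rng.random((64, 64)), 2)     # smooth synthetic speckle (reference image)
ys, xs = np.mgrid[8:56, 8:56]                    # pixel coordinates of the ROI

true_u = np.array([0.6, -0.3])                   # imposed translation (uX, uY), in pixels
# deformed image built so that f(X) = g(X + u)
coords = np.mgrid[0:64, 0:64].astype(float)      # (row, col) grid
g = map_coordinates(f, coords - true_u[::-1].reshape(2, 1, 1), order=3)

F = f[ys, xs].ravel()
gy, gx = np.gradient(f)                          # reference-image gradient (rows, cols)
GX, GY = gx[ys, xs].ravel(), gy[ys, xs].ravel()

u = np.zeros(2)                                  # sought nodal displacement (rigid translation)
for _ in range(25):
    # advected deformed image G = g(X + u); map_coordinates uses (row, col) order
    G = map_coordinates(g, [ys.ravel() + u[1], xs.ravel() + u[0]], order=3)
    # SSD normal equations M du = b, with shape functions N reduced to ones
    M = np.array([[GX @ GX, GX @ GY],
                  [GX @ GY, GY @ GY]])
    b = np.array([GX @ (F - G), GY @ (F - G)])
    du = np.linalg.solve(M, b)
    u += du
    if np.linalg.norm(du) < 1e-6:
        break

print(u)   # close to true_u
```

As in the text, the operator M is assembled once from the reference-image gradient, so only the advected image G is re-interpolated at each iteration.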

| Distortion model
In this study, the distortions are considered continuous and bounded. It is then reasonable to approximate them with polynomials. Therefore, the distortion field is written as

$\mathbf{u}_d(\mathbf{X}) = \sum_k P_k\, Q_k(\mathbf{X}),$

where $\{Q_k\}$ is the family of polynomials used to approximate the distortion field and $\{P_k\}$ are the corresponding coefficients. In the present case, Zernike polynomials are considered. These polynomials are commonly used in ophthalmology [16] to describe the retina's deformation and aberrations. They are defined on the unit circle and thus rely on polar coordinates: $\theta \in [0, 2\pi]$ and $\rho \in [0, 1]$. The polynomials are defined as follows: [17]

$Z_j^i(\rho, \theta) = R_j^i(\rho)\cos(i\theta), \qquad Z_j^{-i}(\rho, \theta) = R_j^i(\rho)\sin(i\theta),$

where j is the order of the model, $j \geq i \geq 0$, and $R_j^i$ are radial polynomials defined as

$R_j^i(\rho) = \sum_{k=0}^{(j-i)/2} \frac{(-1)^k (j-k)!}{k!\left(\frac{j+i}{2}-k\right)!\left(\frac{j-i}{2}-k\right)!}\, \rho^{\,j-2k}$

for $j - i$ even ($R_j^i = 0$ for $j - i$ odd). Figure 1 displays the various polynomials involved in a fifth-order model. Using this basis gives a physical meaning to the modes: for instance, $Z_1^{-1}$ and $Z_1^1$ are the stretch and rotation components, while $Z_3^{-1}$ and $Z_3^1$ describe a barrel effect.
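The radial polynomials admit the standard closed-form expression above, which can be sketched directly in code; the function names `zernike_radial` and `zernike` are ours, not part of the paper's toolchain.

```python
from math import factorial
import numpy as np

def zernike_radial(j, i, rho):
    """Radial polynomial R_j^i on the unit disc (j >= i >= 0; zero when j - i is odd)."""
    if (j - i) % 2:
        return np.zeros_like(rho)
    return sum((-1) ** k * factorial(j - k)
               / (factorial(k) * factorial((j + i) // 2 - k) * factorial((j - i) // 2 - k))
               * rho ** (j - 2 * k)
               for k in range((j - i) // 2 + 1))

def zernike(j, i, rho, theta):
    """Z_j^i = R_j^|i| cos(i*theta) for i >= 0, R_j^|i| sin(|i|*theta) for i < 0."""
    r = zernike_radial(j, abs(i), rho)
    return r * (np.cos(i * theta) if i >= 0 else np.sin(-i * theta))

rho = np.linspace(0, 1, 5)
print(zernike_radial(2, 0, rho))   # defocus-type radial term, 2*rho**2 - 1
```

For instance, $R_2^0(\rho) = 2\rho^2 - 1$ and $R_3^1(\rho) = 3\rho^3 - 2\rho$ follow directly from the sum.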

| Calibration of the distortion model
To calibrate the distortion model, that is, to obtain its parameters from a set of images, an FE-DIC problem is solved on a reduced basis. This reduced basis is the finite-element approximation of the Zernike polynomials of order j. Hence, the component in the X direction of the sought displacement has the following form:

$u_X(\mathbf{X}) = \sum_k P_k^X \sum_n Q_k(\mathbf{X}_n)\, N_n(\mathbf{X}),$

where $\mathbf{X}_n$ are the nodal positions. Using the notation introduced in Section 2, this description of the displacement field is recast as

$\mathbf{U}_X = \mathbf{Q}\, \mathbf{P}_X,$

where, in the same spirit as in Section 2, $\mathbf{Q}$ collects the values of the selected Zernike polynomials at the nodal positions and $\mathbf{P}_X$ collects the amplitudes of the polynomials for the X component of the displacement. Using this reduced description of the displacement field, the DIC problem is rewritten as

$\mathbb{Q}^T \mathbf{M}\, \mathbb{Q}\, d\mathbf{P} = \mathbb{Q}^T \mathbf{b},$

where $\mathbb{Q}$ is a block diagonal matrix filled with two $\mathbf{Q}$ matrices. The incremental correction of the Zernike amplitudes for the two components of the distortion field is thus obtained directly from the images of a dedicated target acquired by the camera (used as deformed images) and a synthetic image of the target (used as a reference image).

Figure 1: Zernike modes up to Order 5
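The reduced-basis idea can be illustrated with a simplified stand-in: instead of the full reduced DIC system, the sketch below recovers the amplitudes $\mathbf{P}_X$ of a few low-order Zernike-like modes from a noisy nodal displacement field by ordinary least squares. The mode selection, node sampling and noise level are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400                                   # number of nodes, mapped to the unit disc
rho = np.sqrt(rng.uniform(0, 1, n))       # sqrt() gives uniform density over the disc
theta = rng.uniform(0, 2 * np.pi, n)

# a small Zernike-like basis evaluated at the nodes (columns of Q)
Q = np.column_stack([
    np.ones(n),           # piston
    rho * np.cos(theta),  # tilt X
    rho * np.sin(theta),  # tilt Y
    2 * rho**2 - 1,       # defocus
])

true_P = np.array([0.5, -1.2, 0.3, 2.0])
u_X = Q @ true_P + rng.normal(0, 0.01, n)   # noisy nodal displacement, X component

# reduced-basis normal equations: (Q^T Q) P_X = Q^T u_X
P_X = np.linalg.solve(Q.T @ Q, Q.T @ u_X)
print(np.round(P_X, 2))
```

In the actual procedure, the projection is applied to the DIC operator ($\mathbb{Q}^T \mathbf{M}\, \mathbb{Q}$) rather than to a precomputed displacement field, but the least-squares structure is the same.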

| Computation of the effective displacements
Once the distortion model is calibrated, the effective displacements $\mathbf{u}_r$ have to be retrieved from the total displacements $\mathbf{u}_T$ obtained by DIC. However, this computation is not straightforward since, in the general case, the total displacement results from the composition of the distortion and the sample deformation, as presented in Figure 2. This leads to the following non-linear equation:

$\mathbf{u}_T(\mathbf{X}) = \mathbf{u}_r(\mathbf{X}) + \mathbf{u}_{di}(\mathbf{X} + \mathbf{u}_r(\mathbf{X})),$

where $\mathbf{u}_{di}$ is the distortion field during the recording. Contrary to the DIC problem presented above, this non-linear inverse problem is well-posed. There is thus no need to solve it on average in a least squares sense; the resolution is performed pointwise. From an initial estimate $\mathbf{u}_r = \mathbf{u}_T(\mathbf{X}) - \mathbf{u}_{di}(\mathbf{X})$, an incremental correction $d\mathbf{U}_r$ to the current nodal displacement $\mathbf{U}_r$ is sought. The equation above is linearised assuming that the correction is small, and the following linear system is solved at each node of the finite-element mesh used for estimating $\mathbf{u}_r$:

$\left[\mathbf{I} + \nabla \mathbf{u}_{di}(\mathbf{X} + \mathbf{u}_r)\right] d\mathbf{U}_r = \mathbf{u}_T(\mathbf{X}) - \mathbf{u}_r(\mathbf{X}) - \mathbf{u}_{di}(\mathbf{X} + \mathbf{u}_r(\mathbf{X})).$

After solving this linear system at each node, the effective displacement $\mathbf{U}_r$ is updated using the estimated correction $d\mathbf{U}_r$. Note that the distortion field $\mathbf{u}_{di}$ is defined by polynomials whose gradient can be estimated analytically. The convergence of this non-linear iterative process is thus extremely fast, robust and accurate. Convergence to a numerically zero correction is usually obtained after two to three iterations. Notice that the initial guess suggested above corresponds to an additive composition of the distortions and the effective sample transformation, which in practice is closely related to the solution obtained when using a sensor-to-sensor approach.

Figure 2: Schematic diagram of the transformations for a static sample (in blue) and for a moving sample (in red). This figure emphasises the fact that the distortions can differ from one shot to another. $\mathbf{u}_r$ is the effective displacement, while $\mathbf{u}_{ss}$ denotes the displacement obtained using a sensor-to-sensor approach.
A correction to this first (rough) estimate is accessed through the proposed procedure. Figure 2 summarises the various transformations occurring when recording a static sample (in blue) or a moving sample (in red). $\mathbf{X}$, $\mathbf{X}_d$, $\mathbf{x}$ and $\mathbf{x}_d$ denote, respectively, the reference configuration, the distorted reference configuration, the deformed configuration and the distorted deformed configuration. In addition, $\mathbf{u}_{d1}$ and $\mathbf{u}_{d2}$ denote, respectively, the distortion fields when recording a static and a moving sample. Indeed, we will see in Section 4.3 that a non-negligible level of variability can be observed in the camera distortions from one shot to another, which must be taken into account when attempting to compare series of images. Finally, $\mathbf{u}_{ss}$ denotes the displacements obtained when using a sensor-to-sensor approach (e.g., Moulart et al. [13]). From this figure, the following relation can be deduced:

$\mathbf{u}_{ss}(\mathbf{X}) = \mathbf{u}_T(\mathbf{X}) - \mathbf{u}_{d1}(\mathbf{X}) = \mathbf{u}_r(\mathbf{X}) + \mathbf{u}_{d2}(\mathbf{X} + \mathbf{u}_r(\mathbf{X})) - \mathbf{u}_{d1}(\mathbf{X}).$

It follows that the first-order error, $\boldsymbol{\epsilon}$, made when using a sensor-to-sensor strategy can be computed as follows:

$\boldsymbol{\epsilon}(\mathbf{X}) = \mathbf{u}_{ss}(\mathbf{X}) - \mathbf{u}_r(\mathbf{X}) \approx \nabla \mathbf{u}_{d2}(\mathbf{X})\, \mathbf{u}_r(\mathbf{X}) + \mathbf{u}_{d2}(\mathbf{X}) - \mathbf{u}_{d1}(\mathbf{X}).$

In this relation, three terms appear: the gradient of the experiment's distortions, the effective displacement and the difference between the calibration's distortions and those of the experiment. Hence, for this error to be negligible, three conditions have to be met: the displacements must be small enough during the experiment, and the distortions must be small and constant enough from one shot to another. It will be shown that these conditions are not fulfilled when using a Cordin-580. An estimation of the error introduced when using a sensor-to-sensor (or additive) approach in comparison with a true composition approach, in terms of resulting displacement and strain fields, will be discussed in Sections 4.5 and 5.
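The pointwise Newton resolution of the composition equation can be sketched on a single point with a toy analytic distortion field; the quadratic `u_d` below is an arbitrary stand-in for the calibrated Zernike model, chosen only because its gradient is available in closed form.

```python
import numpy as np

def u_d(p):
    """Toy analytic distortion field (illustrative polynomial, not the calibrated model)."""
    x, y = p
    return np.array([0.02 * x * y, 0.01 * (x**2 - y**2)])

def grad_u_d(p):
    """Analytic gradient of the toy distortion field."""
    x, y = p
    return np.array([[0.02 * y, 0.02 * x],
                     [0.02 * x, -0.02 * y]])

X = np.array([0.3, -0.5])
true_ur = np.array([0.8, 0.2])
u_T = true_ur + u_d(X + true_ur)       # total displacement, as measured by DIC

ur = u_T - u_d(X)                      # additive initial guess (sensor-to-sensor-like)
for _ in range(10):
    res = ur + u_d(X + ur) - u_T       # residual of the composition equation
    J = np.eye(2) + grad_u_d(X + ur)   # Jacobian I + grad(u_d)
    dur = np.linalg.solve(J, -res)
    ur += dur
    if np.linalg.norm(dur) < 1e-12:
        break

print(ur)   # converges back to true_ur in a few iterations
```

Consistent with the text, the analytic gradient makes the iteration converge to numerical zero within two to three steps for distortions of this magnitude.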

| Target design and manufacturing artefacts
Usually, DIC is performed between two images taken by the same camera. In the present case, as the distortions are induced by the camera, a true reference is needed. In order to have an undistorted reference image, a synthetic one is created. Several kinds of targets are proposed in the literature for optical calibration. Usually, dot patterns [12] or grids [4] are used. However, in this study, a speckle pattern is used: it provides information all over the sensor, for all distortion spatial frequencies, and falls into a single DIC framework. Several articles have been published tackling the issue of generating optimised speckle patterns for DIC. [18,19] This is usually done by working in Fourier space and then applying an inverse transformation. [20] In the present methodology, an image twice the size of the sensor is generated, in order to avoid any boundary effects. A ring is constructed in Fourier space, in which the amplitude and the phase are randomly attributed following a Gaussian law between −1 and 1. The radius of the ring defines the size of the pattern, the thickness defines the pattern's variation and the random values define the variation of the pattern's intensity. The speckle pattern is obtained using an inverse fast Fourier transform (IFFT). It is then cropped to the sensor size (Figure 3A). The obtained pattern is then dynamically renormalised in 16 bits so that the whole range of grey levels is used (Figure 3B).
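The recipe above (random ring in Fourier space, inverse FFT, crop, 16-bit renormalisation) can be sketched as follows. The sensor size, ring radius and thickness are illustrative, and uniform random draws on bounded intervals are used as a simple stand-in for the bounded random law described above.

```python
import numpy as np

rng = np.random.default_rng(42)
H, W = 512, 512                          # twice a toy 256 x 256 sensor, to avoid edge effects
ky = np.fft.fftfreq(H).reshape(-1, 1)
kx = np.fft.fftfreq(W).reshape(1, -1)
k = np.hypot(kx, ky)                     # radial spatial frequency

r0, dr = 0.06, 0.02                      # ring radius -> speckle size; thickness -> variation
ring = np.abs(k - r0) < dr / 2

amp = rng.uniform(-1, 1, (H, W))         # random amplitudes and phases on the ring
phase = rng.uniform(-np.pi, np.pi, (H, W))
spec = np.where(ring, amp * np.exp(1j * phase), 0)

speckle = np.real(np.fft.ifft2(spec))    # inverse FFT gives the speckle pattern
speckle = speckle[:256, :256]            # crop to the sensor size

# dynamic renormalisation to the full 16-bit grey-level range
lo, hi = speckle.min(), speckle.max()
speckle16 = np.round((speckle - lo) / (hi - lo) * 65535).astype(np.uint16)
print(speckle16.min(), speckle16.max())  # 0 65535
```

The ring radius is expressed in cycles per pixel, so `r0 = 0.06` produces speckles roughly 16 pixels across, a scale that can be tuned to the etching resolution.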
Several techniques have been tried to transfer this synthetic speckle pattern to a physical target: using a standard printer, using a professional printer on a dibond plate and using a laser etching machine. All of these induced some specific artefacts. Here, only the artefacts induced by the laser etching machine will be discussed, since this is the technology used in the rest of the present work. The laser etching machine produces a beam with a size of approximately 200 μm. By controlling the intensity of the beam and its speed, the speckle can be printed on a PMMA sample with an approximate etching depth of 50 μm. The effective displacement $\mathbf{u}_{raw}$ obtained (after deconvolution) when performing DIC between the synthetic speckle pattern and the first frame taken by the camera is depicted in Figure 4A. Vertical and horizontal (not shown) bands with a magnitude of approximately 1 pixel are detected. To study the evolution of these bands over time, a pixel line orthogonal to the bands (depicted by the black dashes in Figure 4A) is plotted for all of the frames (Figure 4B). This highlights the fact that these bands are stationary. Hence, it is thought that these bands are induced by the printing method (for instance, by the screws controlling the displacement of the beam head). Considering the very low amplitude of this systematic bias induced by the printing technology, and in order to cancel out its contribution, in the rest of this work the effective displacements will systematically be corrected, in an additive manner, as follows:

$\mathbf{u}_r^{corr}(\mathbf{X}, i) = \mathbf{u}_r(\mathbf{X}, i) - \mathbf{u}_{raw}(\mathbf{X}),$

where i is the frame number. Notice that this procedure implies that the first frame taken during the experiment is an image of the sample at rest; in this case, it only encloses this stationary printing bias. Notice that another route would have been to take a reference image of the sample at rest with a standard, high-resolution camera.
The speckle pattern would be generated by manually spraying the sample with black and white paint. Nevertheless, the questions associated with lens distortions, reproducibility of the set-up and even reproducibility of the sample pattern would have remained open. In that context, it has been decided to keep a very generic and highly reproducible route associated with such a small and very characteristic printing bias.

| Error definition
The indicators classically used in DIC to quantify the quality of the measurement are the mean value and the standard deviation of displacement fields computed using a series of images of a static sample. The first one refers to the systematic error (bias), while the second assesses the random error, that is, the uncertainty. However, since the camera used in this work relies on multiple sensors, the systematic error can differ from one frame to another; thus, the global standard deviation over a series of images may include both systematic and random errors. To avoid any confusion, the systematic error is simply obtained from the mean error over field and sensors, while the global camera random error indicator, noted $\sigma_{cam}$, is obtained from the square root of the average of the sensor variances $V_s$:

$\sigma_{cam} = \sqrt{\frac{1}{N_s}\sum_s V_s}, \qquad V_s = \mathrm{var}(\mathbf{U}_s),$

where $\mathbf{U}_s$ denotes the displacement field obtained for sensor s and $N_s$ the number of sensors. In comparison, the single-sensor random error (see, e.g., Section 4.2) is simply noted $\sigma(s)$.

Figure 4: Example of the artefacts induced by the printing and their evolution on the sensors
Notice that such a definition of the systematic and random errors is perfectly adequate as long as we deal with displacements, strains and strain rates, which is the objective of this paper. Nevertheless, it does not clearly highlight the error arising when differentiating displacements from one frame to another, that is, when dealing with velocity and acceleration. Indeed, in that case, an additional indicator capturing the systematic error jump from one frame to another would need to be defined. It may be computed, for instance, as the standard deviation, over a series of images, of the mean displacement value per sensor.
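Both indicators can be sketched on simulated static-sample data, where each sensor carries its own systematic offset plus white noise; the bias and noise levels (0.05 and 0.08 pixels) and the pixel count are arbitrary choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n_sensors, n_px = 78, 10_000
bias = rng.normal(0.0, 0.05, n_sensors)              # sensor-dependent systematic error
noise = rng.normal(0.0, 0.08, (n_sensors, n_px))     # per-pixel random error
U = bias[:, None] + noise                            # static-sample displacement fields

systematic = U.mean()                                # mean error over field and sensors
V_s = U.var(axis=1, ddof=1)                          # per-sensor variance (bias excluded)
sigma_cam = np.sqrt(V_s.mean())                      # global random-error indicator
jump = U.mean(axis=1).std(ddof=1)                    # frame-to-frame systematic jump

print(round(sigma_cam, 3), round(jump, 3))
```

Note that `sigma_cam` recovers the within-sensor noise level (0.08) without being inflated by the sensor-to-sensor bias, while the `jump` indicator recovers the bias spread (0.05), which is the quantity that matters when differentiating displacements between frames.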

| Presentation of the Cordin-580
The camera used in this work is a Cordin-580. This camera is a rotating mirror camera that is able to capture 78 images with a resolution of 2472 × 3296 pixels (i.e., 8 megapixels), at up to a speed of 4 million fps. For speeds below 500,000 fps, an electric turbine is used for mirror rotation. Between 500,000 and 1 million fps, a dedicated gas turbine is fed with compressed air. Finally, above 1 million fps, the gas turbine and the camera must be fed with helium to increase the rotation speed and mitigate friction. Within the framework of this metrological work, only the electric drive is used, so we focus on speeds lower than 500,000 fps. Nevertheless, the proposed methodology is not turbine dependent.
The optical apparatus used in the camera is depicted in Figure 5A. The light beam, depicted by the black arrows, enters the camera through the objective. It then encounters a cube beam splitter that will either transmit the light or reflect it at an angle of 45°. The light is then reflected on mirrors until it reaches a lens. After this lens, another mirror reflects the light beam onto a three-faced rotating mirror. Finally, the light goes through a lens, used to mitigate the bias induced by the mirror rotation over the individual sensor exposure time, and eventually reaches the sensor. Additionally, some specificities of the camera's geometry are worth mentioning. First, in order to let the light beams pass, Sensors 40 and 80 do not exist; thus, black images are given for these theoretical sensors. Furthermore, due to their positioning, Sensors 21 to 60 are always the ones hit by the beam reflected by the beam splitter. For the same geometrical reasons, Sensors 1, 39, 41 and 79 are illuminated when the rotating mirror is nearly perpendicular to the light beam. On the contrary, Sensors 20, 21, 60 and 61 are illuminated when the rotating mirror is hit by the beam at a shallow angle. This is illustrated in Figure 5B.
Since each optical element (mirror and lens) may have an influence on the final distortion field, it is in practice impossible to determine the contribution of each one. Therefore, a phenomenological model has been chosen (see Section 3.1). However, it is possible to identify some physical dependencies of the distortion field. Indeed, given that the beam is split, the distortion field may depend on whether the light has been transmitted or reflected by the beam splitter. In addition, since the rotating mirror has three faces with their own defects, the distortion field may also depend on which face reflected the light. Furthermore, since there are 78 independent lens-mirror-sensor combinations, the field may differ from one sensor to another. Finally, as the mirror can rotate at speeds up to 16,000 rotations per second (RPS), inertial forces may deform the mirror, making the distortion field speed dependent.
The objective of the following sections is to identify the minimal building blocks and parameters needed to statistically capture the camera-induced intrinsic distortions.

| Single-shot model
Capturing high-order distortions requires a high number of polynomials, which decreases the robustness of the identification, especially with respect to noise. To enhance the robustness while maintaining an accurate identification, one can reduce the order of the basis used. In order to find the optimal order of the Zernike polynomials, several orders (from 2 to 7) are used to perform a calibration on a particular frame. Indeed, since each sensor has its own focusing system, a relatively significant variation of the focus can be observed from one sensor to another; thus, the sharpest image, selected using the criterion from Reference [21], is used to identify the model parameters. Once the order is chosen, a calibration is performed on all of the frames to ascertain its relevance and see how it behaves as a function of image sharpness.
Frame 12 is first used to perform different calibrations using different orders of the Zernike basis (from 2 to 7). Figure 6 depicts the resulting σ(12) versus the basis order for both the X and Y directions. As expected, the higher the order, the lower the projection error. From this figure, a sixth-order basis is chosen, since no significant improvement is achieved with higher orders. This sixth-order basis leads to 28 sought parameters per direction. To evaluate this choice and underline the influence of relative image sharpness, a calibration is performed on a whole image set taken at 480,000 fps. Figure 7 depicts the random error obtained for each frame in both directions. The influence of the sharpness on the projection error is clearly visible: the higher the sharpness, the lower the errors. Minimising the issue of focus variation from one sensor to another will be tackled in future work. In the end, a calibration with a sixth-order basis leads, on average, to a displacement random error, noted $\sigma_{cam}$, of 0.084 and 0.078 pixels in the X and Y directions, respectively. It is interesting to notice that, in practice, the observed variability of the displacement random error from one frame to another is not random over time. This can be observed in Figure 8, where the random errors are displayed, not as a function of sharpness, but as a function of sensor number, that is, in the order they appear within the recording time sequence. The relative sharpness variability is still underlined through the use of a linear colour scale. The two dashed lines delineate the sensors hit by the reflected beam from those illuminated by the transmitted beam. Let us recall that Sensors 40 and 80 are non-existent and are replaced by dark images. Hence, their random errors are equal to zero.
In Figure 8, two particular signals are obtained: for the X direction (see Figure 8A), the random errors follow a square-like signal, whereas for the Y direction (see Figure 8B), they follow a triangular-like signal. It has been verified that these signals are obtained for all of the speeds tested. In the X direction, the random error is higher for sensors in the centre of the timeline, that is, when the light beams are reflected by the beam splitter (Figure 5A), from Sensors 21 to 60. Hence, the reflection by the beam splitter seems to increase the noise obtained on the concerned sensors. On the other hand, the triangular signal obtained in the Y direction seems to be related to the angle between the mirror face and the incident light beam. Indeed, the random errors are minimal when the light beam hits the mirror face perpendicularly, that is, close to Sensors 1, 39, 41 and 79, and increase as the angle of reflection becomes shallower (up to Sensors 20 and 60). Notice that, in practice, sharpness issues (see Figure 7) and incident beam angle issues (see Figure 8B) are closely related: shallow angles increase the image blurring. Therefore, such a random error pattern over time is somewhat inherent to the technology.

Figure 5: Schematic diagram of the Cordin-580
Finally, Figure 9 depicts the distortion fields identified on a particular frame for both directions. It highlights the complexity of the distortion fields. Furthermore, let us note that the amplitude of the identified distortion is about 40 pixels, which is non-negligible compared to the effective displacement that we wish to capture during real experiments. This further justifies the need to properly model these distortions.

Figure 6: σ(12) versus the polynomial order

Figure 7: Random errors obtained for all frames of a shot taken at 480,000 fps, with a full sixth-order basis
It is observed, in this section, that a sixth-order Zernike polynomial basis with sensor-dependent optimised parameters can capture reasonably well the complex distortion pattern induced by the camera apparatus for a single shot. While the random error is mainly kept below 10⁻¹ pixels, it clearly remains sensor-dependent, for at least two connected reasons: relative sharpness variation and relative sensor position within the rotation sequence. Such variabilities can potentially be slightly mitigated by finely tuning the individual focus rings but cannot be eliminated.

| Camera model
The purpose of the camera model is to deal with potential parameter variation from one shot to another. As introduced earlier, many parts of the apparatus can affect the ultimate distortion perceived by each sensor. Some biases can be systematic, some probabilistic, like the impact of the mirror face, and some random, like vibrations. The final objective is to have a unique model that is statistically representative of the distortions induced by the camera and calibrated once and for all. To do so, the parameters' dependencies are investigated over a large set of recording sequences. To study the possible impact of the rotation speed, shots have been taken at different speeds. In order to statistically have each sensor illuminated by each face of the mirror, several shots have been taken for each speed. Then, using the optimal-order polynomial basis found previously, the distortion parameters can be identified for each frame and each recording shot. Finally, a global camera model is constructed by averaging the parameters associated with the same sensor, the same mirror face permutation (1 of 3) and the same rotation speed. That is to say, for a given speed and a given sensor, the model will have three possible values for each parameter, depending on the mirror face that illuminated the sensor. In the end, the camera distortion model is parameterised by 28 × 2 (directions) × 3 (faces) × 78 (sensors) parameters per acquisition speed, that is, a total of 13,104 parameters.
Figure 8. Example of the random errors obtained with a shot at 480,000 fps, in both directions.
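The averaging step described above amounts to a group-by over the (speed, sensor, face) triplet. A minimal sketch, with a hypothetical record layout (the keys `'speed'`, `'sensor'`, `'face'` and `'params'` are illustrative, not the authors' data structure):

```python
import numpy as np
from collections import defaultdict

def build_camera_model(shots):
    """Average the per-shot distortion parameters over repeated
    calibration shots, grouped as in the paper: one averaged parameter
    vector per (rotation speed, sensor, mirror face). Each entry of
    `shots` is a dict; 'params' would hold, for example, the 28 Zernike
    coefficients of one direction for one frame."""
    groups = defaultdict(list)
    for s in shots:
        key = (s['speed'], s['sensor'], s['face'])
        groups[key].append(np.asarray(s['params'], dtype=float))
    return {key: np.mean(vals, axis=0) for key, vals in groups.items()}
```

For a full model one such averaged vector exists per direction, giving the 28 × 2 × 3 × 78 parameters quoted in the text.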
In order to carry out the construction of the model, the mirror face illuminating each sensor has to be determined. Since the mirror is rotating, and since the sensors' layout is known, only the mirror face illuminating one of the sensors actually has to be determined. Using optical considerations and the fact that the mirror is not a perfect equilateral triangle, the mirror face can be determined using the parameter corresponding to the rigid-body motion in the Y direction. Indeed, when plotting the value of this parameter for all of the shots, three distinctive clusters can easily be identified by means of the k-means clustering algorithm (see Figure 10, where each cluster is depicted by a colour). In addition, several typical parameter variabilities can also be underlined. It is observed, in Figure 10, that depending on the mirror position when triggering (one possibility among three permutations), a rigid-body motion variation of up to 12 pixels can be obtained from one shot to another. This particularly calls into question the classical methodology, which consists in using a previous recording sequence as a reference to cancel out distortion bias. Furthermore, parameter variation from one rotation speed to another can also be observed on the plot; it induces about 5 pixels of variation over the studied range. Finally, let us note that, at a given rotation speed, there is a spread of about 2 pixels even between shots from the same cluster. This trend illustrates the variability of the parameters, which is attributed to the vibrations induced by the system. Notice that 2 pixels corresponds to a rigid-body motion of 11 μm (with the current experimental settings), which is minor considering an electric drive rotating at 2000 RPS.
Figure 9. Example of a distortion field obtained on a particular frame of a shot at 480,000 fps.
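Because the clustering is one-dimensional (a single rigid-body parameter per shot) and the clusters are well separated, a minimal Lloyd's-algorithm sketch is enough; the authors do not state which k-means implementation they used, so this is an illustrative stand-in.

```python
import numpy as np

def kmeans_1d(values, k=3, n_iter=100):
    """Minimal 1-D k-means (Lloyd's algorithm): separates the rigid-body
    Y-motion parameter of each shot into k clusters, here the three
    mirror faces visible in Figure 10."""
    v = np.asarray(values, dtype=float)
    # spread the initial centres over the data range
    centres = np.quantile(v, np.linspace(0.1, 0.9, k))
    for _ in range(n_iter):
        labels = np.argmin(np.abs(v[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = v[labels == j].mean()
    return labels, centres
```

Each shot's label then identifies which of the three face permutations of the model applies to it.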
It is obvious at that stage that averaging parameters over a large series of calibration shots will increase the robustness of the model, but it is also clear that the residual ±1 pixel variation within individual clusters cannot be eliminated and will slightly increase the measurement errors obtained on a single shot in the previous section.
Once the camera model has been built, it can be reapplied to the calibration shots. The random errors can then be computed and compared to those obtained previously, where a model was specifically identified for every single shot. In Table 1, the random errors obtained using the single-shot model are compared to those obtained from the camera model, for the shots taken at 480,000 fps. The systematic error increases by an order of magnitude but remains reasonably low; see, for example, the systematic Y displacement error, which is 0.15 pixels. Nevertheless, the random errors remain of the same order, that is, below 0.1 pixels. Note that the strain error levels are similar to those obtained using a single-shot model, with a negligible systematic error lower than 40 μm m⁻¹ combined with a random error lower than 2 mm m⁻¹.
Let us note that the pairing between the sensors and the mirror faces illuminating them is a priori unknown during an acquisition. Hence, it is necessary to apply the three possible permutations of the global camera model to the experimental data. For the same reasons as explained earlier, the use of the wrong permutation will introduce non-physical displacements in the Y direction, for example, a displacement jump of 12 pixels (see Figure 10) from Sensors 20 to 21 and from Sensors 60 to 61. As a consequence, in what follows, the correct permutation, that is, the appropriate set of model parameters, is identified manually using, as a figure of merit, the time variation of the displacement field obtained in the Y direction.
Figure 10. Amplitude of the rigid-body parameter Y₀⁰ versus the mirror's rotation speed, for each shot taken, for one frame.

| Extrinsic parameters
As explained earlier, the final objective is to have a unique camera model calibrated once and for all. However, since the experimental conditions of the calibration procedure and of a true experiment may differ, the evaluation of extrinsic parameters has to be addressed.
Since the positioning of the sample relative to the camera sensor, during calibration and/or a real experiment, is done manually, the measured fields may differ by an affine transformation. Hence, a correction to the camera model's parameters has to be found and applied in order to account for the change of these extrinsic parameters. The difficulty here lies in the fact that the sought affine transformation is composed with all of the distortions produced along each individual optical path. Therefore, it does not necessarily produce the same effect on each sensor, especially from one side of the beam splitter to the other. Since our camera model is both phenomenological and statistical, a proper deconvolution is complex. In that context, we propose a sensor-dependent evaluation of the apparent change of extrinsic parameters through the acquisition of a set of images prior to the experiment. These images must be taken in the same configuration as the experiment, with the sample static.
Once the pairing is identified, the camera model can be applied to obtain displacements from the images of this static shot. An apparent change of extrinsic parameters can then be evaluated from these fields. In practice, for each sensor, the parameters are obtained from a simple least squares projection of these displacement fields onto a sixth-order Zernike polynomial basis. The camera model parameters are then updated in an additive manner using the parameters obtained from this projection. Let us remark that this correction methodology relies on the assumption that the displacements captured on a static sequence are solely induced by the change of physically extrinsic parameters (the relative position of the camera and the sample), and that the change actually induced on the images is more complex (to be captured with the full sixth-order Zernike polynomial basis).
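The per-sensor additive update can be sketched as follows. For self-containment, a 2-D monomial basis of total degree 6 is used here as a stand-in for the authors' sixth-order Zernike basis (it conveniently also has 28 modes); the function and variable names are illustrative.

```python
import numpy as np

def poly_basis(x, y, order=6):
    """2-D monomial design matrix up to total degree `order`: a simple
    stand-in for the sixth-order Zernike basis of the paper (28 columns
    for order 6)."""
    return np.column_stack([x**i * y**j
                            for i in range(order + 1)
                            for j in range(order + 1 - i)])

def update_extrinsics(model_coeffs, x, y, u_static, order=6):
    """Project the displacement measured on the static pre-test shot
    onto the basis and add the result to the stored model coefficients,
    mimicking the additive update described in the text."""
    A = poly_basis(x, y, order)
    delta, *_ = np.linalg.lstsq(A, u_static, rcond=None)
    return model_coeffs + delta
```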

| Validation of the model: Imposed translation
In order to validate the model constructed previously, controlled translations along the X direction have been imposed on the sample using positioning stages (see Figure 11). The translations have been imposed along the X axis from −2.5 to 2.5 mm with a step of 0.5 mm. Since the positioning stages have a 10-μm graduation, the systematic error on the imposed displacement is estimated to be 5 μm. Furthermore, the alignment between the focal plane and the sample has been ensured using a laser set-up; the residual misalignment error is estimated to be about 0.1°. The images were acquired with the camera at 100,000 fps. In addition, a set of images was taken with no displacement imposed. These images are used to correct the change of the extrinsic parameters, as explained previously. Once the correct permutation of the model is identified, the displacement and strain fields can be recovered, as well as the resulting errors (Table 2). Given that only translations are imposed on the sample, the strains should be zero. The strains obtained remain of the same order of magnitude as those obtained previously (see Section 4.3). Focusing on displacement and applying the appropriate pixel-to-millimetre ratio, here 33.7 μm/pixel, the imposed displacements are recovered. While the random error is close to that obtained previously on stationary images, the systematic error increased slightly. While a significant part of the errors (at least along the X direction) can be explained by the uncertainty of the stage positioning (5 μm), the origin of the systematic error in the Y direction remains unclear. A tiny play within the three-angle rotation stage composing the experimental set-up (see Figure 11) may explain part of the result, but no obvious clue has been found.
Table 1. Errors obtained for all of the shots taken at 480,000 fps, using a sixth-order Zernike basis and both single-shot and global camera models.
While the errors obtained following the proposed methodology are apparently close to those obtained in the literature, [13] it is interesting to look at the hidden error implicitly induced in previous works when assuming an additive composition of the distortion and the effective displacement. In this work, the composition equation is solved to recover the effective displacement from the knowledge of the distortion and the total displacement (Equation 15). Using Figure 2 and Equation (18), the difference between the two methodologies can be quantitatively assessed. As an illustration, Figure 12 depicts an example of displacement fields and the resulting strain fields obtained using either a composition scheme or an additive scheme, noted respectively u_r and u_ss, for an imposed displacement of 2500 μm. First, let us remark that the use of an additive scheme leads to heterogeneous displacement fields, with an amplitude of about 5 pixels. Furthermore, since the displacement fields obtained using an additive scheme are heterogeneous over the sensor, they lead to errors in the strain fields: for example, errors of about 10 mm m⁻¹ (or 1%) on the axial strain, which is 5 times higher than the random error discussed in the previous section. Even more critically, such strain field errors committed when assuming an additive composition are not systematic errors but are heterogeneous over the sensors, which could strongly affect the analysis of the data. This demonstrates the relevance of the composition assumption in order to obtain the effective displacement and correct strain fields. The importance of this step will be further demonstrated in Section 5, where the transformation applied to the sample is no longer homogeneous. It is important to underline here that the impact of the two additive steps introduced within our methodology, to account for extrinsic parameters and printing bias, has not been evaluated.
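Solving a composition equation of this kind typically reduces to a fixed-point iteration. The paper's Equation (15) is not reproduced here, so this one-dimensional sketch assumes the form u_tot(X) = u_r(X) + u_d(X + u_r(X)), which is consistent with the first-order error expression of Equation (18); the additive answer serves as the initial guess.

```python
import numpy as np

def effective_displacement(x, u_tot, u_dist, n_iter=30):
    """Fixed-point solve of u_tot(X) = u_r(X) + u_d(X + u_r(X)) for the
    effective displacement u_r (1-D sketch; the exact form of the
    paper's composition equation is assumed). `u_dist` samples the
    distortion u_d on the grid `x` and is interpolated at the
    displaced positions."""
    u_r = u_tot.copy()  # additive answer as starting guess
    for _ in range(n_iter):
        u_r = u_tot - np.interp(x + u_r, x, u_dist)
    return u_r
```

The iteration converges whenever the distortion gradient is small (a contraction), which is the regime of interest here.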
Nevertheless, both are associated with tiny displacement fields compared to the camera distortion itself (see Figure 9); thus, they should introduce only minor errors.
In the end, a global model has been constructed that is able to describe the distortions for a given speed, a given sensor and a given mirror permutation. By recording a reference sequence prior to the test, in the test configuration, the change of extrinsic parameters can be accounted for with a correction. The measurement random error obtained with this methodology, for imposed translations, is about 0.2 pixels for the displacement and 2.0 mm m⁻¹ for the strain. The global procedure and the errors obtained at each step are summarised in Figure 13. These values are rather high compared to standards in DIC using mono-sensor technologies: 10⁻³ pixels for the displacement and 0.1 mm m⁻¹ for the strain when using a Shimadzu HPV-X. [22] However, these results are promising, since the imaging technology used allows images with unparalleled resolution (8 megapixels) to be recorded at ultra-high speeds.
Figure 12. Comparison of the displacement fields obtained using a compositional approach (u_r) or an additive approach (u_ss), and their corresponding strain fields, for an imposed displacement of 2500 μm (74.18 pixels) on a given sensor.
Figure 13. Schematic representation of the calibration and experimental procedures used to obtain quantitative kinematic data using a Cordin-580.

| EXPERIMENTAL APPLICATION
In the previous section, a calibration strategy was presented and preliminary validation tests were performed. The calibration methodology will now be applied to an impact test partly inspired by the one performed by Kalthoff [23] (see Figure 14). The experimental set-up and the test results are presented in this section.

| Specimen material and geometry
For this experiment, a sample made of PMMA of the brand Altuglas CN, manufactured by Arkema, is used. It is an amorphous thermoplastic polymer with a Vicat B 50 softening point at 115 °C. Its mechanical properties are known to depend on temperature and strain rate. They have been extensively studied, and constitutive models have been developed to describe its viscoelastic behaviour. [24,25] The specimen used is a 100 × 75 × 5 mm PMMA sample, with a 37.5-mm-long notch at half its height (see Figure 15). The sample and the notch are obtained using a laser cutting machine. The size of its beam is approximately 200 μm; hence, the width of the notch is estimated to be of the same size. The sample dimensions have been chosen to match the aspect ratio of the Cordin-580 images, with a slight margin to keep the sample boundaries in the frame throughout the recording. Finally, in order to apply the DIC procedure previously described, the tailored synthetic speckle pattern is carved into the sample using the laser cutting machine. The depth of the engraved speckle pattern is about 50 μm.
Figure 14. Principle of the experimental test.
Figure 15. Specimen with an etched speckle pattern.

| Loading and test configuration
As presented in Figure 14, the test configuration can be described as a purely inertial impact loading. This configuration has been used in a series of recent papers. [22,26,27] It consists in impacting, by means of a projectile, a self-supported flat sample glued on a waveguide. The waveguide has two purposes: holding the sample in place and shaping the input wave, for instance by mitigating misalignment issues. The true interest of this configuration is the control of the boundary conditions. Indeed, all boundaries are free edges, except for the impact edge, where a smooth pulse is introduced. The sample is then simply loaded by its own acceleration. In the present case, by impacting the sample only along half of its height, a compression wave and a shear wave (at the notch tip) are introduced. The applied compression stress in the waveguide can be estimated using the following formula: σ ≈ ρCV_p, where ρ is the density of the material, C is its wave celerity and V_p is the projectile speed. In our case, using ρ = 1190 kg m⁻³, C = 2150 m s⁻¹ and V_p = 46 m s⁻¹, it yields an estimated pulse stress of 118 MPa, that is, on the order of the expected intermediate strain-rate tensile strength (≈100 MPa). [28] The impact is performed by propelling a cylindrical projectile, made of polyoxymethylene (POM), at high speed with a compressed-air gun. The projectile has a diameter of 40 mm and is 80 mm long. The projectile's speed is controlled by the pressure imposed in the gas gun. In this test, a pressure of 0.20 MPa is used, which leads to an approximate speed of 46 m s⁻¹. The length of the waveguide, which has the same dimension as the projectile, has been carefully chosen to ensure that no reflected wave enters the specimen before the crack starts.
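The pulse stress estimate above can be checked with a one-line calculation, using the values quoted in the text:

```python
# Estimated input pulse stress, sigma ~ rho * C * V_p, with the values
# quoted in the text for PMMA and the measured projectile speed.
rho = 1190.0   # density, kg m^-3
C = 2150.0     # longitudinal wave celerity, m s^-1
V_p = 46.0     # projectile speed, m s^-1

sigma = rho * C * V_p            # acoustic-impedance estimate, in Pa
print(f"{sigma / 1e6:.0f} MPa")  # -> 118 MPa
```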
Given that the alignment between the projectile and the specimen is critical in this kind of experiment, the specimen is placed as close as possible to the air gun exit. Moreover, the alignment is checked before each test by introducing a long dummy projectile at the gas gun end and verifying that the contact with the waveguide is planar. Figure 16 presents different views of the experimental set-up and the specimen in place.
Figure 16. Experimental set-up.

| Experimental set-up
The event is recorded using an ultra-high-speed camera, the Cordin-580, equipped with a 90-mm Tamron objective, at 480,000 fps with a CCD gain of 35% and a CDS gain of 0 dB. At such speed, the film duration is about 167 μs. In order to provide enough light, two Pro-10 flashes from Profoto are used. They are set in freeze mode, at 8.5 f-stops. In that configuration, the illumination typically lasts 1 ms.
To trigger the flashes and the camera, an infrared light-gate system is used (SPX1189 series, Honeywell). When obscured by the projectile, the optical system sends, after a delay of 200 μs, a 5-V TTL signal to the camera, which triggers the flashes immediately and starts recording itself 170 μs later. This light gate is thus placed as close as possible to the barrel's exit. The specimen is then placed so that the waveguide is 20 mm beyond the optical barrier, so that the Cordin starts recording when the waves enter the specimen. Let us note that the precision of the specimen's positioning is estimated to be about 2 mm. The 170-μs delay is set manually within the camera, by estimating a priori the projectile flight time and the duration of the wave propagation within the waveguide. This methodology is extremely dependent on the reliability of the gas gun to propel at a specific speed. Considering our equipment, the triggering jitter is about 40 to 50 μs, which is large compared to the recording length (167 μs). Nevertheless, improvement of the triggering is in progress for future work.

| Mesh and DIC parameters
Since a crack propagates in the sample during the test, a specific mesh is used. It is an unstructured mesh, with twin nodes along the crack path (see Figure 17). The use of twin nodes allows the mesh to open, in order to properly capture displacement jumps and strain localisation at the crack location. The definition of such a mesh is done in two steps. First, DIC is performed on a continuous, structured mesh. This allows the deformed images of the sample to be transferred into the reference configuration. Then, the crack path can be defined in the undeformed configuration, a new unstructured mesh can be built and the DIC run again. Figure 17 presents the mesh, deformed and superimposed onto the last image of the test. The element size, defining the kinematic resolution, is 32 pixels on average but smaller along the crack (about 20 pixels). A Tikhonov regularisation with a characteristic length of four elements is used to filter out spatial noise. This is achieved, within the DIC framework, by adding a penalty term to Equation (14). Finally, the pixel size, obtained by recording an image of a ruler prior to the test, is 33.7 μm. This leads to a field of view (FOV) of 83.3 × 111.1 mm (Table 3).
Figure 17. Unstructured mesh (in red) with twin nodes along the crack path, deformed and superimposed onto the final frame taken during an impact test.
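The effect of adding a Tikhonov penalty to a linearised least-squares problem, as done here with Equation (14), can be illustrated generically; `A`, `L` and `lam` below are placeholders, not the authors' actual DIC operator, regularisation operator or weight.

```python
import numpy as np

def tikhonov_solve(A, b, L, lam):
    """Generic Tikhonov-regularised least squares,
        min_u ||A u - b||^2 + lam ||L u||^2,
    solved via the normal equations. In regularised DIC, A would be the
    linearised correlation operator, u the nodal displacements and L a
    high-pass operator penalising spatial noise."""
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)
```

With `lam = 0` the ordinary least-squares solution is recovered; increasing `lam` filters out short-wavelength (noisy) components of the solution.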
Prior to computing time derivatives, it is also usual to filter out temporal noise, at least when a simple finite-difference scheme is used. In order to capture strain-rate fields, displacements are first convolved pointwise with a temporal Gaussian filtering kernel with a window size of 25 frames. The size of the Gaussian kernel is chosen manually, in order to obtain smooth first and second time derivatives of the displacement. Strain rates are then obtained using a simple first-order finite-difference scheme. Such data filtering marginally affects the strain but significantly decreases the amount of noise on the strain rates. An estimation of the strain-rate random error is given in Table 4. It is evaluated using the sequence of images taken, prior to the experiment, on a static sample.
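For one material point, this smoothing-then-differencing step can be sketched as below. The window-to-standard-deviation conversion (σ = window/6) is an assumption: the authors only state the 25-frame window size.

```python
import numpy as np

def strain_rate(eps_t, dt, window=25):
    """Temporal Gaussian smoothing over a `window`-frame kernel followed
    by a first-order finite difference, as described in the text.
    `eps_t` is the strain time history of one point, `dt` the frame
    period (about 2.08 microseconds at 480,000 fps)."""
    t = np.arange(window) - (window - 1) / 2.0
    kern = np.exp(-0.5 * (t / (window / 6.0)) ** 2)
    kern /= kern.sum()                      # unit-gain smoothing kernel
    smoothed = np.convolve(eps_t, kern, mode='same')
    return np.gradient(smoothed, dt)        # finite-difference derivative
```

Edge frames (within half a window of either end) are distorted by the convolution and should be discarded from the analysis.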

| Displacement and strain fields
To deconvolve the distortions and the real displacements, the global camera model built earlier for an acquisition rate of 480,000 fps is used. At this step, the pairing between the sensors and the mirror faces illuminating them is unknown; it requires the application of the three different parameter permutations to the experimental data. Using the same optical considerations previously mentioned, the optimal pairing is obtained by considering the displacement in the Y direction. Furthermore, using the reference shot performed before the test with the sample remaining static, changes in extrinsic parameters between the calibration procedure and the experiment (see Section 4.3) are evaluated. Once the correct pairing is identified and the model parameters are updated, the displacement, strain and strain-rate fields can be extracted. Let us first look at the temporal evolution of the axial displacement and speed of a node located at the middle of the impacted edge. Figure 18 shows the displacement in blue and the speed in red. The four vertical dashed lines mark the time steps for which the associated fields will be discussed later on. The two black circles depict, respectively, the crack initiation and crack branching time steps. The loading of the specimen induces, at the impact edge, a displacement ramp starting at 225 μs and reaching about 3 mm at the end of the record. Note that this corresponds to the range investigated above, in the method validation (Section 4.5). The speed evidences three stages: between 170 and 210 μs, the velocity increases in the positive direction; then there is a nearly linear increase in magnitude, followed by a plateau at −40 m s⁻¹. The first stage can be explained by a clockwise rotation of the sample, which will be described later. The plateau is reached in approximately 60 μs, which corresponds to an acceleration on the order of 10⁶ m s⁻². According to Ravi-Chandar et al., [29] such loading leads to a brittle crack regime in PMMA.
This is also confirmed by the data presented in Figure 19, which shows sample images and displacement fields in both directions for the four time steps introduced previously. The whole time history can be found in the Supporting Information. In what follows, the time count starts when the optical barrier is cut; hence, the camera recording starts at 170 μs. During the first 55 μs, the displacement fields obtained are consistent with a clockwise rotation of the sample. This rotation can be explained by considering that, at the time of the impact, the sample and waveguide were slightly misaligned, with a small anticlockwise angle. Thus, when the impact occurs, the waveguide rotates. This initial inclination is thought to be induced by the air blast preceding the projectile, due to the lack of an air exit in the barrel's nozzle. This is verified by comparing a frame of a calibration shot taken before the test to its corresponding frame of the test; a rotation of 0.7° is found. The displacement field in the X direction at 235 μs further confirms this explanation, since the compression starts at the bottom corner of the impacted edge. Notice that this is not an issue, since the whole kinematic field history is captured. Then, as the wave enters the sample, the Poisson effect due to the compression is captured in the Y direction, as shown in Figure 19 at t = 276.98 μs. Notice that, at that stage, the crack has already initiated, which is highlighted by the opening of the mesh. This result shows that the crack initiates as the compression wave travels into the sample, not after any wave rebound. A proper capture of the displacement jump from crack initiation to branching is obtained. Branching of the crack is visible along both the X and Y directions at 330.84 μs. In line with Ravi-Chandar et al., [29] the crack propagates globally with a 60° inclination.
Furthermore, strain fields can be obtained, and Figure 20 depicts some of them. The strain fields obtained at 214.84 μs confirm that the displacement fields obtained in the early stages of the experiment are due to a rigid-body rotation of the sample. The compression wave in the X direction and the induced Poisson effect are captured by the strain fields (e.g., at 276.98 μs). The sample undergoes, on its top right part, a uniaxial compression of 30 mm m⁻¹. Under the assumption that only compression is taking place in this part of the sample, a Poisson ratio of ν = 0.38 can be obtained. It is computed by averaging the ratios over a small vertical band, about two elements wide, located 18.5 mm away from the impacted edge. This spatially averaged ratio is then averaged in time between the 42nd and 65th frames (i.e., between 260 and 310 μs). This value is in line with the Poisson ratio reported in the literature [30] and the one given by the manufacturer (0.39). At the same time, ahead of the notch tip, the sample undergoes a shear strain concentration, but the crack does not propagate in this direction. It is interesting to note that at higher impact speeds (55 m s⁻¹), and in a less brittle material such as polycarbonate, the crack would propagate horizontally [29] within this shear region. Classically, this kind of propagation is associated with shear band formation. Performing such a temporally and spatially resolved analysis for different impact speeds would certainly increase our understanding of the origin of such a Mode I/shear band fracture mechanism transition. Additionally, the axial strain-rate fields are depicted in Figure 20. During the experiment, the axial strain rate reaches 600 s⁻¹; locally, at the crack tip, it even exceeds 1000 s⁻¹.
Figure 18. Temporal evolution of the displacement and speed in the X direction of a node located in the middle of the impacted edge.
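The spatial-then-temporal averaging behind the ν = 0.38 estimate can be sketched as follows; the array layout `eps[frame, row, col]` and the function name are hypothetical choices for illustration.

```python
import numpy as np

def poisson_ratio(eps_xx, eps_yy, cols, frames):
    """Estimate Poisson's ratio as -eps_yy / eps_xx, averaged first over
    a narrow vertical band of elements (column indices `cols`) and then
    over a frame range, mirroring the averaging described in the text.
    Arrays are indexed as eps[frame, row, col] (an assumed layout)."""
    exx = eps_xx[frames[0]:frames[1], :, cols[0]:cols[1]]
    eyy = eps_yy[frames[0]:frames[1], :, cols[0]:cols[1]]
    per_frame = (-eyy / exx).mean(axis=(1, 2))  # spatial average per frame
    return per_frame.mean()                     # temporal average
```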
Finally, similarly to what has been done in Section 4.5, the effective displacement fields obtained when using a composition scheme can be compared to those that would have been obtained assuming an additive decomposition. The differences, which are the errors introduced when using a sensor-to-sensor approach, obtained in both directions for the last image taken during the test, together with the errors induced on the strain field, are depicted in Figure 21. From Equation (18), we know that these errors are, in a first-order approximation, linked to the effective displacement, the distortion's gradient and the difference between the distortions of the experiment and those of the calibration. The discrepancy has an amplitude of 6 pixels, which represents approximately 10% of the final displacements obtained during the experiment. Moreover, let us note that the differences obtained in the lower right part of the sample have an amplitude of only 1 pixel, which can be explained by the fact that this part is subjected to nearly no deformation; the differences are indeed higher in the heavily translated and deformed parts of the sample, that is, in the regions where the effective displacement is important. Furthermore, the induced strain errors have an amplitude of about 15 mm m⁻¹, which is the same order of magnitude as the strains obtained during the experiment. This further demonstrates the necessity to correctly model the distortions and justifies the need to compute the kinematic data in the undistorted reference configuration. Otherwise, strong errors on both strain levels and strain localisation would be made.
Figure 19. Undistorted images and displacement fields obtained during an impact test, for different time steps.
Figure 20. Strain and strain-rate fields obtained during an impact test, for different time steps.

| OUTSTANDING ISSUES AND SCOPE OF THE METHOD
Although the feasibility of a very high spatial and temporal sampling DIC measurement based on the multi-sensor and rotating mirror technology has clearly been evidenced within this work, it is important to understand the limitations and outstanding issues of the proposed methodology. A few points need to be raised.
Even if the paper attempts to propose a general camera model that could be built once and for all in the laboratory, and then occasionally checked and adjusted to take into account any variation of the camera behaviour over time, the present work has focused on a specific lens and FOV. However, the lens is part of the distortion chain, and no attempt to deconvolve its contribution from the camera contribution has been made. A series of measurements for different magnifications has been performed (not presented here), and the results evidence a variation of the camera model parameters. Nevertheless, integrating a lens parameterisation would probably require moving from a simple phenomenological, polynomial camera model to a physical one introducing ray tracing, which is beyond the scope of the present paper. In that context, the present methodology requires, in practice, performing the calibration under the experimental conditions (speed, FOV and lens), and thus repeating it every time the set-up configuration changes.
In line with the previous point, the impact of the mirror rotation speed has been evidenced, but no obvious parameterisation has emerged from the data. The progressive deformation of the mirror with increasing speed, due to centrifugal effects, would logically have mainly implied an increasing vertical compression of the image, but the data show much more complex variations, with a clear trend transition beyond 300,000 fps. In that context, the present camera model does not allow parameters to be extrapolated over acquisition speeds. Only calibrated speeds, in our case 100,000, 200,000, 300,000, 400,000 and 480,000 fps, can be used for material testing. This point is critical, since the short-term main goal is to perform tests beyond a million fps. However, at that speed, the helium drive is required, and only five to six shots can be run with a 50-L bottle. This implies that at least two bottles have to be used: one and a half for calibration, plus one or two tests. This introduces a significant additional cost but, more importantly, requires long camera run times at very high speed, which has a strong impact on the mirror-bearing lifetime. A way of mitigating this would be to avoid changing the set-up configuration; in that case, this material and supply over-cost would be reduced to a single calibration campaign.
Figure 21. Estimation of the displacement errors committed when using a sensor-to-sensor (or additive) approach (ϵ in Equation 18) and the associated strain errors.
All of the results presented in this work are based on a single, non-optimised, speckle pattern. The characteristic period of the pattern, 32 pixels, has been chosen to account for the weak sharpness of the images, which are blurred over about 20 pixels. Such blurring is partly due to the camera itself, where the light reaches each sensor with a different reflection angle, down to very shallow ones, and partly because each sensor has its own focusing system, which is tuned manually at the factory. We are currently developing an automated strategy to individually optimise this focus. If a significant sharpness improvement can be achieved, a finer DIC mesh could be used and better performance could be reached. Thus, it is important to note that the results provided here are speckle dependent.

CONCLUSIONS AND PERSPECTIVES
In this paper, a dedicated calibration methodology for a multi-sensor rotating mirror ultra-high-speed camera is presented. The accuracy of the method has been assessed first using images from a static sample and then using images after an imposed translation. A pre-notched sample of PMMA has then been subjected to an inertial impact test. Kinematic full-field data have been obtained and have quantitatively captured the events during the test. The main conclusions are as follows:
- Since complex and non-negligible distortions are induced by the optical apparatus of the multi-sensor rotating mirror camera, they need to be modelled and corrected. These distortions are modelled with Zernike polynomials and identified using DIC and a synthetic speckle pattern.
- The error made when using a sensor-to-sensor approach is given, to first order, by ϵ ≈ −∇u_d2(X)·u_r(X) + u_d1(X) − u_d2(X). Hence, three requirements must be met to use this strategy: the distortions have to be constant from one shot to another; they have to be small; and the effective displacements also have to be small. When using a Cordin-580, the first two conditions are not met. Furthermore, during an impact test such as the one presented in this work, the last condition is not fulfilled either. It follows that the use of such an approach would lead to a displacement error proportional to the displacement of the sample and to the distortion gradient. For example, for a translation of 2.5 mm (74.2 pixels), an error of nearly 0.2 mm (6 pixels) is obtained on the displacement, which induces strain errors of about 15 mm m⁻¹. This highlights the value of the proposed methodology.
- The proposed methodology achieved a statistical accuracy, over shots, of 0.15 ± 0.09 pixels for the displacements and 40 μm m⁻¹ ± 2 mm m⁻¹ for the strains. Applying this dedicated calibration to a moving sample, using imposed micrometre displacements, an ultimate accuracy of about 0.5 ± 0.2 pixels for the displacements and 100 μm m⁻¹ ± 2 mm m⁻¹ for the strains was eventually achieved. These errors include algorithmic errors, camera variability and experimental bias.
- Quantitative kinematic full-field data from an inertial impact test on a pre-notched PMMA sample have been acquired at a rate of 480,000 fps. These kinematic fields quantitatively captured the events occurring during the test, such as the compression wave and the induced Poisson effect, or the shear strain concentration at the notch tip.
- The kinematic fields obtained have a better spatial sampling than what can traditionally be obtained with classically used high-speed cameras, at the price of higher strain random errors. In the present configuration, a spatial sampling of about 1 mm is reached over an FOV of approximately 100 × 100 mm.
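The first-order error of the sensor-to-sensor approach can be checked numerically against the figures quoted above. In the sketch below, the distortion gradient (~0.08) and the individual sensor distortions are hypothetical local values chosen to reproduce the cited orders of magnitude; the 74.2-pixel (2.5 mm) translation and the implied magnification come from the text.

```python
import numpy as np

def additive_error(grad_ud2, u_r, ud1, ud2):
    """First-order displacement error of the sensor-to-sensor (additive)
    approach: eps ~ -grad(u_d2)(X) . u_r(X) + u_d1(X) - u_d2(X)."""
    return -grad_ud2 @ u_r + (ud1 - ud2)

grad_ud2 = np.array([[0.08, 0.00],
                     [0.00, 0.08]])   # assumed local distortion gradient
u_r = np.array([74.2, 0.0])           # rigid translation (pixels)
ud1 = np.array([0.3, 0.1])            # sensor-1 distortion (px, assumed)
ud2 = np.array([0.2, 0.1])            # sensor-2 distortion (px, assumed)

eps_px = np.linalg.norm(additive_error(grad_ud2, u_r, ud1, ud2))
scale_um_per_px = 2500.0 / 74.2       # ~33.7 um/px, from 2.5 mm = 74.2 px
print(eps_px, eps_px * scale_um_per_px / 1000.0)  # error in px and in mm
```

With these assumed values the gradient term dominates and the error comes out at roughly 6 pixels (about 0.2 mm), consistent with the example given in the conclusions.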
Although a calibration shot has to be performed before each experiment for the time being, the results obtained are promising. Recent developments enabling the reconstruction of stress fields based solely on temporally resolved dynamic kinematic data [26,31,32] would necessarily take advantage of this massive gain in spatial sampling. In that context, the proposed work potentially opens the way to a quantitative analysis of the local and transient mechanical response of materials subjected to high strain-rate experiments.