
As previously mentioned, there are different techniques for interferometric analysis, and the most automated methods are also the most complex, to the point that they are often avoided simply because there is no opportunity to understand the calculations being performed. The basic idea of satellite differential interferometry is quite simple: the difference between two phase images is computed and converted into an offset along the satellite's line of sight using geometric constructions for a radar with a known wavelength. Were it not for the need to account for the Doppler effect, the extent of the imaged area (scene), and the terrain scanning method used, this task would be at the level of school arithmetic. But the subsequent processing of individual interferograms and their series requires far more complex mathematics and computation, and that is the part of the work we will discuss here.

Creating a system of equations

An unwrapped interferogram gives the displacement of every spatially coincident pixel over the time interval between the pair of images used to construct it. Naturally, the displacement can be calculated only for those pixels that are present in both images of the pair. The offset is also measured with some error, which decreases as the interferogram coherence (a measure ranging from zero to one) increases. Yet even 100% coherence does not guarantee perfectly accurate displacement measurements: a typical problem is a change in the optical properties of the atmosphere between the two acquisitions, which alters the travel time of the radar beam through the atmosphere. Since these atmospheric changes are unknown to us, the calculations misinterpret them as a false surface displacement.
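The conversion from unwrapped phase to line-of-sight displacement can be sketched in a few lines. This is a minimal illustration, assuming a Sentinel-1-style C-band radar; the exact wavelength value and the sign convention (motion towards versus away from the satellite) depend on the sensor and the processing software.

```python
import numpy as np

# Approximate Sentinel-1 C-band wavelength in metres (assumed here).
WAVELENGTH = 0.055465763

def los_displacement(unwrapped_phase):
    """Convert unwrapped interferometric phase (radians) into
    line-of-sight displacement (metres).  The factor of 4*pi reflects
    the two-way travel path of the radar signal; the sign convention
    varies between processing packages."""
    return -WAVELENGTH / (4 * np.pi) * unwrapped_phase

# One full fringe (2*pi of phase) corresponds to half a wavelength
# of line-of-sight motion, about 2.8 cm for C-band.
phase = np.array([0.0, 2 * np.pi])
disp = los_displacement(phase)
```

The key point is that each interference fringe encodes only half a wavelength of motion, which is why even millimetre-scale displacements are detectable, but also why the phase must be unwrapped before any conversion.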

In general, to construct an SBAS (Small BAseline Subset) network, the pairs of images are constrained by a perpendicular baseline (the perpendicular distance between the satellite positions at the two acquisition moments) and a temporal baseline (the time interval between acquisitions), so that only pairs likely to yield high-quality interferograms are retained.
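The pair-selection step described above can be sketched as a simple filter over all candidate pairs. The acquisition list, baseline values, and thresholds below are hypothetical, chosen only to illustrate the two constraints.

```python
from datetime import date
from itertools import combinations

# Hypothetical acquisition list: (date, perpendicular baseline in metres).
acquisitions = [
    (date(2021, 1, 1), 0.0),
    (date(2021, 1, 13), 35.0),
    (date(2021, 1, 25), -80.0),
    (date(2021, 2, 6), 120.0),
    (date(2021, 2, 18), 10.0),
]

MAX_TEMPORAL_DAYS = 50      # temporal-baseline limit (assumed threshold)
MAX_PERP_BASELINE = 150.0   # perpendicular-baseline limit in metres (assumed)

# Keep only pairs satisfying both baseline constraints.
pairs = [
    (a, b)
    for (a, b) in combinations(acquisitions, 2)
    if (b[0] - a[0]).days <= MAX_TEMPORAL_DAYS
    and abs(b[1] - a[1]) <= MAX_PERP_BASELINE
]
```

Every surviving pair becomes one interferogram, and hence one equation in the system discussed below; tightening either threshold shrinks the network, loosening it admits noisier pairs.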

Things are more interesting with the time constraint. Sometimes we can obtain reliable results (high coherence) even with an interval of about a year between images, but only for individual pixels. By increasing the permissible interval between images we significantly increase the number of interferograms, even though the overwhelming majority of their pixels will be unsuitable for further processing due to loss of coherence. In return, we gain the ability to analyze atmospheric changes over a long time interval and to apply the corresponding corrections to all the calculated displacements. Since atmospheric variations are rather smooth relative to the spatial detail of the analysis (on a scale of tens of kilometers), the corrections obtained are valid for a large territory around each pixel that remains coherent over months. At the same time, it is usually sufficient to limit the interval between images to about 50 days in order to determine the surface displacements themselves. In a notebook on GitHub, S1A_Stack_CPGF_T173_TODO.ipynb, I show examples of calculating the errors caused by atmospheric effects and how simply excluding the affected images can greatly improve the results. Note that the calculations there are done for whole interferograms, so that notebook can also be run on Google Colab, whereas pixel-by-pixel calculations are too resource-intensive for it.
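One common way to exploit the spatial smoothness of atmospheric delay is to treat the low-frequency component of an unwrapped interferogram as the atmospheric contribution and subtract it. The sketch below uses a Gaussian low-pass filter; the cut-off scale of tens of kilometres, the pixel size, and the function name are all assumptions for illustration, not the exact procedure from the notebook mentioned above.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_atmosphere(unwrapped, pixel_size_m=90.0, scale_km=20.0):
    """Split an unwrapped interferogram into a spatially smooth
    component (attributed to atmospheric delay) and a residual
    (attributed to ground deformation).  Assumes atmospheric delay
    varies on scales of tens of kilometres while deformation is
    more local; the chosen cut-off scale is a tunable assumption."""
    sigma = scale_km * 1000.0 / pixel_size_m  # smoothing radius in pixels
    atmosphere = gaussian_filter(unwrapped, sigma=sigma)
    return unwrapped - atmosphere, atmosphere
```

The obvious limitation is that any real deformation broader than the cut-off scale is absorbed into the "atmosphere" term, which is why long-interval pairs, where deformation and atmosphere can be separated in time, are so valuable.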

Now consider building a system of equations for all the obtained interferograms. Choosing the interval between two consecutive images (12 days) as the unit of time, for each pixel of the study area we can write a series of equations expressing the displacement over a set of time intervals with a given confidence (determined by the coherence). The coherence thus defines the weighting factor of each equation in the system, that is, of the equation as a whole. This becomes clearer if we consider the boundary cases: at zero confidence (zero weight) the equation carries no information and must be eliminated, while at maximum confidence the equation should enter the solution with maximum weight. To solve the system by the least squares method (LSM), the equations must be normalized by the square roots of the weights, because the weight of each equation determines its importance in the solution while the method itself operates on squared values.
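For a single pixel, the construction above can be written down directly: each interferogram contributes one equation whose unknowns are the incremental displacements over the unit time intervals it spans, and each equation is scaled by the square root of its coherence before solving. The matrix, observations, and coherence values below are illustrative numbers, not real data.

```python
import numpy as np

# Hypothetical example: 4 acquisitions -> 3 unit time intervals,
# connected by 5 interferograms.  Row i of A marks which intervals
# interferogram i spans.
A = np.array([
    [1, 0, 0],   # pair (t0, t1)
    [0, 1, 0],   # pair (t1, t2)
    [0, 0, 1],   # pair (t2, t3)
    [1, 1, 0],   # pair (t0, t2)
    [0, 1, 1],   # pair (t1, t3)
], dtype=float)

# Observed displacements for one pixel (illustrative) and the
# coherence of each interferogram, used as the equation weights.
d = np.array([1.0, 2.0, 1.5, 3.1, 3.4])
coherence = np.array([0.9, 0.8, 0.7, 0.4, 0.05])

# Equations with (near-)zero weight carry no information: drop them.
keep = coherence > 0.1

# Square roots of the weights, because least squares minimises
# squared residuals: scaling a row by sqrt(w) weights its squared
# residual by w.
w = np.sqrt(coherence[keep])

# Scale each remaining equation and solve the weighted system.
x, *_ = np.linalg.lstsq(A[keep] * w[:, None], d[keep] * w, rcond=None)

# Cumulative displacement over time is the running sum of the
# per-interval increments.
timeseries = np.cumsum(x)
```

Repeating this solve for every pixel of the stack yields the displacement time series; the weighting ensures that low-coherence interferograms nudge the solution only slightly, while incoherent ones are excluded outright.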