An evaluation of the data space dimension in phase retrieval: results in the Fresnel zone

In this paper, we address the problem of computing the dimension of the data space in the phase retrieval problem. Starting from the quadratic formulation of phase retrieval, the analysis is performed in two steps. First, we exploit the lifting technique to obtain a linear representation of the data. Later, we evaluate the dimension of the data space by computing analytically the number of relevant singular values of the linear operator that represents the data. The study refers to a 2D scalar geometry consisting of an electric current strip whose square amplitude of the radiated electric field is observed on a two-dimensional extended domain in the Fresnel zone.


Introduction
Phase retrieval techniques find applications in all the contexts where phase information is not available. In electromagnetism, they arise in antenna or array diagnostics, in the reconstruction of the far-field pattern from near-zone data (phaseless near-field far-field techniques) [1, 2], and in inverse scattering problems. From the mathematical point of view, the lack of phase information makes the problem non-linear, and this complicates the task of finding a solution. Over the years, different numerical procedures to address the problem have been proposed; some of them exploit the amplitude formulation, while others are based on the square amplitude formulation. The latter consists in retrieving the unknown function f ∈ X from the quadratic model

|T f|² = g   (1)

where T : f ∈ X → T f ∈ Y is a linear operator and g denotes the square amplitude data.
The most common techniques to tackle the problem exploit a least-squares minimization. However, since the cost functional to be minimized is not quadratic, trap points may occur. The latter may prevent the minimization from reaching the actual solution of the problem even if the uniqueness conditions are satisfied.
To overcome this drawback, the lifting technique can be used [3]. The latter, starting from the quadratic formulation in (1), exploits a redefinition of the unknown space to recast the phase retrieval problem as a linear one. Despite this, the new unknown function belongs to a functional space whose dimensions are the square of those of the original unknown space. Consequently, for large-scale problems the lifting approach is not feasible, and the phase retrieval problem must necessarily be addressed by resorting to non-convex formulations.
In this framework, avoiding trap points is the main task.
From this point of view, the least-squares minimization based on the square amplitude formulation leads to a cost functional that is smoother than the one obtained by considering the amplitude formulation. Furthermore, the quadratic formulation of the phase retrieval problem allows a deep analysis of the genesis of local minima and allows finding strategies to "cure" them. In particular, it has been shown that if the ratio between the dimension of the data space (M) and the dimension of the unknown space (N) is high enough, no trap points appear in the functional to minimize [4, 5, 6]. From this discussion, it is evident that the dimension of the data space plays a key role in phase retrieval via the quadratic approach; hence, it is worth investigating how to evaluate it from an analytical point of view. As shown in [3], the dimension of the data space can be evaluated by counting the number of significant singular values of the lifting operator. However, to the best of our knowledge, analytical results concerning the singular value behavior of the lifting operator are not available in the literature.
For this reason, with reference to a 2D scalar geometry, we will provide a closed-form expression of the number of significant singular values of the pertinent lifting operator.

Geometry of the problem and preliminary results
In this paper, we consider the 2D scalar geometry depicted in fig. 1, where the y-axis represents the axis of invariance. An electric current J(x) = J(x) îy supported on the set [−a, a] of the x-axis radiates within a homogeneous medium with wavenumber β. The electric field E radiated by such a strip source has one component directed along îy; hence, E(r, θ) = E(r, θ) îy. The square amplitude of the radiated electric field |E|² is observed in the Fresnel zone on a two-dimensional domain that extends along the polar coordinates (r, θ) on the set OD = [r_min, r_max] × [−u_max, u_max], where u = sin(θ). For the geometry at hand, the radiated electric field can be expressed in the variables r and u = sin(θ) by the equation

E(r, u) = (T J)(r, u)   (2)

where T is the linear integral operator that realizes the following mapping

T : J ∈ L²([−a, a]) → E ∈ L²(OD)   (3)

Under the paraxial Fresnel approximation, the operator T can be explicitly written as in [7] in the form

(T J)(r, u) = (e^{−jβr}/√r) ∫_{−a}^{a} e^{−jβx²/(2r)} e^{jβux} J(x) dx   (4)
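As a numerical aside, the operator (4) can be discretized in a few lines; the sketch below assumes the paraxial kernel written above, while the sampling densities, the observation range, and the uniform test current are illustrative choices, not the paper's.

```python
import numpy as np

# A minimal numerical sketch of the Fresnel-zone radiation operator T:
# it maps samples of the strip current J(x), x in [-a, a], to samples of
# the field E(r, u) at a fixed range r, following the paraxial kernel of
# eq. (4). Grid sizes and the test current are illustrative assumptions.
wavelength = 1.0
beta = 2 * np.pi / wavelength        # wavenumber
a = 10 * wavelength                  # half-extent of the source strip

def radiation_matrix(r, u_samples, n_src=201):
    """Discretized T at a fixed range r (simple quadrature over x)."""
    x = np.linspace(-a, a, n_src)
    dx = x[1] - x[0]
    # quadratic (Fresnel) phase times the Fourier-like phase e^{j beta u x}
    kernel = np.exp(-1j * beta * x**2 / (2 * r)) * \
             np.exp(1j * beta * np.outer(u_samples, x))
    return (np.exp(-1j * beta * r) / np.sqrt(r)) * kernel * dx

u = np.linspace(-0.5, 0.5, 101)      # samples of u = sin(theta)
T = radiation_matrix(r=50 * wavelength, u_samples=u)
E = T @ np.ones(201)                 # field radiated by a uniform current
```

The square amplitude data of the paper are then simply `np.abs(E)**2` on the (r, u) grid.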

The lifting operator and its singular values
In this section, we first provide a linear representation of |E(r, u)|², i.e., the square amplitude of the electric field over the observation domain OD. Later, we find the dimension of the data space by evaluating the number of significant singular values of the linear operator which represents the data.
To obtain a linear representation of |E|², let us rewrite the quadratic model |E|² = |T J|² in the form below

|E(r, u)|² = ∫_{−a}^{a} ∫_{−a}^{a} φ(r, u, x) φ*(r, u, x') J(x) J*(x') dx dx'   (5)

with

φ(r, u, x) = (e^{−jβr}/√r) e^{−jβx²/(2r)} e^{jβux}   (6)

denoting the kernel of the operator T. From the last equation, it is evident that if we redefine the unknown space and consider as unknown the function F(x, x') = J(x) J*(x'), then the operator which links the unknown function F(x, x') with the data function |E(r, u)|² is linear. Such an operator is known in the literature as the lifting operator, and it is defined as

A : F(x, x') ∈ L²(D) → ∫∫_D φ(r, u, x) φ*(r, u, x') F(x, x') dx dx' ∈ L²(OD)   (7)

where D = [−a, a] × [−a, a]. A weighted adjoint operator A†_w is given by

A†_w (•) = w(x, x') ∫∫_{OD} φ*(r, u, x) φ(r, u, x') (•) dr du   (8)

where w(x, x') is a weight function, and (•) denotes the function of the variables (r, u) on which the adjoint operator acts.
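The lifting step can be verified on a small discretized example: for any matrix T and current vector J, the quadratic data |T J|² coincide with a linear map applied to the vectorized unknown F = J J^H. The sizes and random entries below are purely illustrative.

```python
import numpy as np

# Numerical check of the lifting idea: the quadratic data |T J|^2 are a
# linear function of the rank-one lifted unknown F = J J^H. Row i of the
# lifting matrix A is the vectorized outer product of the i-th row of T
# with its conjugate. Sizes and entries are illustrative, not the paper's.
rng = np.random.default_rng(0)
n_data, n_unk = 40, 15
T = rng.standard_normal((n_data, n_unk)) + 1j * rng.standard_normal((n_data, n_unk))
J = rng.standard_normal(n_unk) + 1j * rng.standard_normal(n_unk)

F = np.outer(J, J.conj())                                  # lifted unknown
A = np.stack([np.outer(t, t.conj()).ravel() for t in T])   # lifting matrix

quadratic_data = np.abs(T @ J) ** 2      # |T J|^2, quadratic in J
linear_data = (A @ F.ravel()).real       # A F, linear in F
assert np.allclose(quadratic_data, linear_data)
```

Note that A has n_unk² columns, which is the squaring of the unknown space dimension mentioned in the introduction.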
Naturally, the presence of the weight function changes the dynamics of the singular values of the lifting operator A. Despite this, we will show that the number of relevant singular values remains unchanged. For this reason, we can tolerate the changes in the singular value behavior brought by the weight function.
To evaluate the number of significant singular values of the lifting operator, we will find the eigenvalues λ_m of the operator AA†_w. The latter can be expressed as

(AA†_w v)(r_o, u_o) = ∫∫_{OD} H(r, r_o, u, u_o) v(r, u) dr du   (9)

where

H(r, r_o, u, u_o) = ∫∫_D w(x, x') φ(r_o, u_o, x) φ*(r_o, u_o, x') φ*(r, u, x) φ(r, u, x') dx dx'   (10)

With the aim to compute the integral (10) in a very simple way, let us divide the integration domain D as D = D₁ ∪ D₂, where D₁ = {(x, x') ∈ D : x ≠ x'} and D₂ = {(x, x') ∈ D : x = x'}. The set D₂ is a null set with respect to the Lebesgue measure; consequently, the kernel can be computed by performing the integration only on D₁.
With the aim to evaluate H(r, r_o, u, u_o), let us perform the change of variables

X₁ = x − x',  X₂ = (x² − x'²)/r_max   (11)

which is injective and continuously differentiable on D₁. By virtue of (11), the kernel of AA†_w can be recast as

H(r, r_o, u, u_o) = (1/(r r_o)) ∫∫_{D̃₁} w(X₁, X₂) e^{jβ(u_o − u)X₁} e^{jβ(r_max/(2r) − r_max/(2r_o))X₂} (r_max/(2|X₁|)) dX₁ dX₂   (12)

where r_max/(2|X₁|) denotes the Jacobian determinant of the transformation, and D̃₁ indicates the domain onto which the original integration domain D₁ is mapped by (11).
Note that, although the Jacobian determinant is singular for X₁ = 0, such a point does not belong to the integration domain D̃₁. Hence, no singularity appears in the integral (12). Now, if we choose the weight function in such a way that it compensates the Jacobian factor, i.e., w = 2|X₁|/r_max, the kernel reduces to

H(r, r_o, u, u_o) = (1/(r r_o)) ∫∫_{D̃₁} e^{jβ(u_o − u)X₁} e^{jβ(r_max/(2r) − r_max/(2r_o))X₂} dX₁ dX₂   (14)

According to equation (14), the integration should be done on the set D̃₁, which is sketched in fig. 2. However, since we want to recast the operator AA†_w in a form whose eigenvalues are known in closed-form, we will approximate H(r, r_o, u, u_o) by integrating on the smallest rectangle that encloses the set D̃₁. The latter is made up of all the points (X₁, X₂) belonging to the rectangular set [−2a, 2a] × [−a²/r_max, a²/r_max] except for the point (0, 0). By performing the integration on such a domain, we have that

H(r, r_o, u, u_o) = (8a³/(r r_o r_max)) sinc(2βa(u_o − u)) sinc((βa²/(2r_max))(r_max/r_o − r_max/r))   (15)

Accordingly, the operator AA†_w can be expressed as

(AA†_w v)(r_o, u_o) = ∫∫_{OD} (8a³/(r r_o r_max)) sinc(2βa(u_o − u)) sinc((βa²/(2r_max))(r_max/r_o − r_max/r)) v(r, u) dr du   (16)

From (16), it is evident that the operator AA†_w becomes more similar to a convolution operator if we set s = r_max/r. In fact, by doing this, we have that

(AA†_w v)(s_o, u_o) = (8a³/r_max²) s_o ∫_{s_min}^{s_max} ∫_{−u_max}^{u_max} (1/s) sinc(2βa(u_o − u)) sinc((βa²/(2r_max))(s_o − s)) v(s, u) du ds   (17)

Although the previous operator is only an approximation of AA†_w, the positive aspect is that its eigenvalues are known in closed-form. It is worth noting that the operator (17) has been obtained by exploiting two changes of variables. The idea of exploiting a change of variables in such a way that the kernel of the considered operator assumes a desired form has been exploited also in other recent works [8]. In order to compute the eigenvalues of (17), we must solve the eigenvalue problem

AA†_w v_m = λ_m v_m   (18)

where v_m represents the m-th eigenfunction of AA†_w. By fixing ṽ_m(s, u) = v_m(s, u)/s, the eigenvalue problem above can be recast as

(8a³/r_max²) ∫_{s_min}^{s_max} ∫_{−u_max}^{u_max} sinc(2βa(u_o − u)) sinc((βa²/(2r_max))(s_o − s)) ṽ_m(s, u) du ds = λ_m ṽ_m(s_o, u_o)   (19)

The eigenvalues of (19) are known in closed-form. In fact, according to [9], they are given by the equation

λ_m = λ^(u)_{m₁} λ^(s)_{m₂}   (20)

where λ^(u)_{m₁} and λ^(s)_{m₂} denote the eigenvalues of two Slepian-Pollak operators acting, respectively, on the variable u and on the variable s. Since such sequences exhibit a step-like behavior, it results that the eigenvalues of the problem (19) and, consequently, the eigenvalues of the operator (17) are significant up to the index M = M_u M_s. Let us remember that the kernel of the operator in (17) has been obtained by integrating on the smallest rectangle that encloses D̃₁; for this reason, M is not
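The step-like eigenvalue behavior of a sinc-kernel (Slepian-Pollak) operator, and the rule of thumb that the number of significant eigenvalues is about bandwidth × interval length / π, can be reproduced numerically. The discretization below and the 0.5 threshold are illustrative choices, not the paper's.

```python
import numpy as np

# Eigenvalues of an operator with kernel sinc(omega*(t - t')) on an interval,
# computed by a simple grid (Nystrom) discretization. The number of
# significant eigenvalues is close to the "Shannon number"
# omega * (t_max - t_min) / pi. Parameters and threshold are illustrative.
def sinc_kernel_eigs(omega, t_min, t_max, n=400):
    t = np.linspace(t_min, t_max, n)
    dt = t[1] - t[0]
    # np.sinc(z) = sin(pi z)/(pi z), so rescale to get sin(omega d)/(omega d)
    K = np.sinc(omega * (t[:, None] - t[None, :]) / np.pi) * dt
    return np.sort(np.linalg.eigvalsh(K))[::-1]

omega, half = 40.0, 1.0                       # bandwidth and half-interval
eigs = sinc_kernel_eigs(omega, -half, half)
shannon = 2 * omega * half / np.pi            # predicted count, ~25.5 here
n_significant = int((eigs > 0.5 * eigs[0]).sum())
```

The eigenvalues stay nearly flat up to roughly the Shannon number and then fall off sharply, which is the step-like behavior exploited in the text.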
exactly equal to the number of relevant eigenvalues of AA†_w, but it represents an upper bound. Until now, we have focused on the eigenvalues of AA†_w; instead, the actual singular values of A are related to the eigenvalues of AA† by the equation σ_m(A) = √λ_m(AA†), where A† denotes the usual adjoint operator defined without the weight function. For this reason, we know only an approximation of the singular values of A, which is given by the square root of the eigenvalues of AA†_w. In the next section, by means of some simulations, we will check that the actual singular values of A and their approximated version become negligible at the same index. This verification allows us to state that, for the considered geometry, the number of significant singular values of the lifting operator, or in other words the dimension of data space M, satisfies the inequality M ≤ M_u M_s.

Numerical results
In this section, we check that the actual singular values of A and their approximations become negligible at the same index. As test case, we consider the configuration in which a = 10λ, u_max = 0.5, r_min = 25λ (s_max = 4), r_max = 100λ (s_min = 1). With reference to such configuration, in fig. 3 we have sketched the actual singular values of A and their approximated versions in dB. In particular, the blue, red and black diagrams sketch respectively:
• the square root of the eigenvalues of the approximated version of AA†_w provided by (17),
• the square root of the eigenvalues of AA†_w,
• the square root of the eigenvalues of AA†.
As can be seen from fig. 3, the square root of the eigenvalues of the operator (17) exhibits a multi-step behavior, and they are relevant up to the index M = M_u M_s = 164. The multi-step behavior can be understood if we remember that the eigenvalues of such an operator are given by (20). Now, as shown in fig. 4, the sequence {λ^(u)_{m₁}} has a step-like behavior; instead, the sequence {λ^(s)_{m₂}} is not exactly step-like. This automatically implies that the eigenvalues of the operator (17) have a multi-step behavior also before the index M = 164 and, consequently, so does their square root. However, our aim is to forecast the critical index at which the actual singular values of A become negligible. By observing the behavior of the actual singular values (black diagram in fig. 3), it is evident that the singular values beyond the index M = 164 are surely negligible, while those before are almost all significant if the noise level is not too high. This implies that the use of the weighted adjoint changes only the dynamics of the singular values but not the critical index at which they become negligible. For this reason, we can state that M = M_u M_s is an upper bound for the dimension of data space that is very close to its actual value.
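In a discretized setting, the singular values of the lifting operator can be computed without forming the full M × N² matrix: since row i of A is the outer product of row t_i of T with its conjugate, the Gram matrix satisfies (A A^H)_{ij} = |⟨t_i, t_j⟩|². The sketch below uses a random stand-in for T, so it illustrates only the computation, not the paper's geometry.

```python
import numpy as np

# Singular values of the lifting matrix A via the Gram matrix of its rows:
# row i of A is vec(t_i t_i^H), hence (A A^H)_{ij} = |<t_i, t_j>|^2.
# T is a random stand-in here; in the paper it would be the discretized
# radiation operator of the considered geometry.
rng = np.random.default_rng(1)
n_data, n_unk = 60, 12
T = rng.standard_normal((n_data, n_unk)) + 1j * rng.standard_normal((n_data, n_unk))

G = np.abs(T @ T.conj().T) ** 2                          # Gram matrix of rows of A
sigma = np.sqrt(np.maximum(np.linalg.eigvalsh(G), 0.0))[::-1]

# Cross-check against the explicit (much larger) construction of A
A = np.stack([np.outer(t, t.conj()).ravel() for t in T])
sigma_direct = np.linalg.svd(A, compute_uv=False)
```

This route costs O(M²N) memory-light operations instead of storing the M × N² lifting matrix, which matters precisely because lifting squares the unknown space.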

Conclusion
In this article, we have addressed the problem of evaluating the dimension of data space in phase retrieval. In particular, with reference to a 2D geometry consisting of a strip current observed on a two-dimensional observation domain, we first have introduced a linear operator that represents the square amplitude of the radiated field. Then, by studying the singular values of such an operator, we have provided an upper bound for the dimension of data space which is very near to its actual value.

Figure 1. Geometry of the problem.
the eigenvalues of the Slepian-Pollak operators whose kernels are respectively sinc(2βa(u_o − u)) and sinc((βa²/(2r_max))(s_o − s)). The sequences {λ^(u)_{m₁}} and {λ^(s)_{m₂}} are relevant respectively up to the indexes M_u = ⌊4βa u_max/π⌋ + 1 and M_s = ⌊βa²(s_max − s_min)/(2π r_max)⌋ + 1.

Figure 3. Singular values of A and their approximated versions, in dB.