### Orthogonal Polynomials via the Riemann-Hilbert Problem

In this post, I would like to introduce one extremely lucrative way of characterizing orthogonal polynomials: the Riemann-Hilbert problem (RHP). Our input data consists of two things: a contour and a jump matrix. More precisely, given an oriented contour $\Sigma \subset \mathbb{C}$ (the $+/-$ side is to the left/right, respectively, with respect to the given orientation), let $\Sigma^\circ := \Sigma \setminus \{ \text{points of self-intersection and end points} \}$. Furthermore, we take as input an $n \times n$ jump matrix $\boldsymbol J(z)$ defined for $z \in \Sigma^{\circ}$. Then, an RHP asks us to find an $n \times n$ matrix-valued function $\boldsymbol M(z)$ which satisfies the following conditions:

- $\boldsymbol M(z)$ is analytic (entry-wise) in $\mathbb{C} \setminus \Sigma$ and,
- $\boldsymbol M_+(x) = \boldsymbol M_-(x) \boldsymbol J(x)$ for all $x \in \Sigma^\circ$, where $\boldsymbol M_{\pm}(x) = \displaystyle \lim_{\pm \text{ side } \ni x' \to x \in \Sigma} \boldsymbol M(x')$.

Oftentimes, a normalization condition is tacked on to ensure uniqueness of the solution. Indeed, note that if $\boldsymbol M(z)$ satisfies the above conditions, then so does $\boldsymbol E(z) \boldsymbol M(z)$ for any entire matrix $\boldsymbol E(z)$.

## scalar case

Let's try out a simple version of this problem by considering the $n = 1$ case on the real line, which is a scalar boundary-value problem. Given a weight $w(x) \in L_1(\mathbb{R})$ which is Hölder continuous and non-vanishing on $\mathbb{R}$, we're looking for a function $f(z)$ satisfying

- $f(z)$ analytic in $\mathbb{C} \setminus \mathbb{R}$,
- $f_+(x) = f_-(x) w(x)$ for all $x \in \mathbb{R}$,
- $f(z) = 1 + \mathcal{O}(1/z)$ as $z \to \infty$.

To solve this problem, we can think of taking a logarithm $g(z) = \log(f(z))$ and solving an auxiliary RHP:

- $g(z)$ analytic in $\mathbb{C} \setminus \mathbb{R}$,
- $g_+(x) = g_-(x) + \log(w(x))$ for all $x \in \mathbb{R}$,
- $g(z) = \mathcal{O}(1/z)$ as $z \to \infty$,

where $\log(w(x))$ is single-valued and Hölder continuous on $\mathbb{R}$ as well! We will see in a second that the choice of branch of the logarithm is not important for us. With the help of the Plemelj-Sokhotski formulas (a proof of which can be found in Gakhov's *Boundary Value Problems*), we find that the Cauchy transform \[ g(z) = \dfrac{1}{2 \pi i} \int_{\mathbb{R}} \dfrac{\log(w(x))}{x - z} dx\] solves the above RHP, and so, taking $f(z) = e^{g(z)}$, we have solved our problem. Unfortunately, a similar procedure can't work for $n > 1$ due to the lack of a matrix-valued logarithm.
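Before moving on, the scalar solution can be sanity-checked numerically. Here is a small sketch (the weight $w(x) = 1 + e^{-x^2}$ is my own choice for illustration, so that $\log w$ is smooth, positive, and integrable): we approximate $g$ slightly above and below the real axis and verify the jump and the normalization.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative weight: log(w) is smooth, positive, and integrable.
w = lambda x: 1 + np.exp(-x**2)

def f(z):
    # f(z) = exp(g(z)), with g the Cauchy transform of log(w):
    #   g(z) = (1/2πi) ∫ log(w(x)) / (x - z) dx
    num = lambda x: np.log(w(x)) / (x - z)
    # Tails beyond |x| = 10 are negligible for this weight.
    re = quad(lambda x: num(x).real, -10, 10, points=[z.real], limit=200)[0]
    im = quad(lambda x: num(x).imag, -10, 10, points=[z.real], limit=200)[0]
    return np.exp((re + 1j * im) / (2j * np.pi))

x0, eps = 0.5, 1e-3
jump = f(x0 + 1j * eps) / f(x0 - 1j * eps)  # approximates f_+(x0) / f_-(x0)
assert abs(jump - w(x0)) < 0.05 * w(x0)     # jump condition: f_+ = f_- w
assert abs(f(50j) - 1) < 0.02               # normalization: f -> 1 at infinity
```

The small imaginary shift `eps` plays the role of the boundary values $f_\pm$, so the check is only accurate up to $\mathcal{O}(\varepsilon)$.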

## RHP for orthogonal polynomials

That being said, we *can* solve some RHPs. One such problem, discovered by Fokas, Its, and Kitaev, is the RHP characterization of orthogonal polynomials. Given a weight function $w(x)$ such that $x^j w(x) \in L_1(\mathbb{R})$ for $j = 0, 1, 2, \ldots$, we seek a $2 \times 2$ matrix $\boldsymbol Y_n(z)$ which satisfies (henceforth denoted RHP-$\boldsymbol Y_n$):

- $\boldsymbol Y_n(z)$ is analytic in $\mathbb{C} \setminus \mathbb{R}$ and,
- $\boldsymbol Y_{n,+}(x) = \boldsymbol Y_{n,-}(x) \left(\begin{matrix} 1 & w(x) \\ 0 & 1 \end{matrix} \right)$ for all $x \in \mathbb{R}$,
- $\boldsymbol Y_n = (\boldsymbol I + \mathcal {O}_n(1/z))\left(\begin{matrix} z^n & 0 \\ 0 &z^{-n} \end{matrix} \right) $ as $z \to \infty$.

It turns out that this RHP is uniquely solvable by the matrix \[ \boldsymbol Y_n(z) = \left( \begin{matrix} P_{n}(z) & \dfrac{1}{2\pi i} \displaystyle \int_{\mathbb{R}} \dfrac{(P_n w)(x)}{x - z} dx \\ Q_{n - 1}(z) & \dfrac{1}{2\pi i} \displaystyle \int_{\mathbb{R}} \dfrac{(Q_{n-1} w)(x)}{x - z} dx \end{matrix} \right), \quad Q_{n}(z) = -\dfrac{2\pi i}{h_{n}}P_{n}(z), \]

where $P_n(z)$ is the monic orthogonal polynomial *of degree* $n$ with respect to the weight $w$, i.e. it satisfies \[ \int_{\mathbb{R}} x^k P_n(x) w(x) \ dx = 0 \quad \text{ for } k = 0, 1, \ldots, n - 1, \quad h_n = \int_{\mathbb{R}} P_n^2(x) w(x) \ dx.\] Indeed, since the jump matrix is upper triangular, the first column of $\boldsymbol Y_n$ is analytic in $\mathbb{C}$ and grows polynomially at infinity, and so consists of polynomials. Furthermore, the second column experiences jumps akin to those in the RHP for $g(z)$ above, hence the appearance of Cauchy transforms. The orthogonality relation guarantees that the second column behaves like $z^{-n}$ as $z \to \infty$. This solution is unique, and it encodes a lot of information about the orthogonal polynomials.
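To make the decay claim for the second column concrete, here is a small numerical check (my own illustration, not part of the original construction): for the Gaussian weight $w(x) = e^{-x^2}$ and $n = 2$, the monic orthogonal polynomial is $P_2(x) = x^2 - 1/2$, and the $(1,2)$ entry should behave like $-h_2/(2\pi i z^3)$ far from the real line, since orthogonality kills the $z^{-1}$ and $z^{-2}$ terms of the expansion.

```python
import numpy as np
from scipy.integrate import quad

# Gaussian weight and its monic degree-2 orthogonal polynomial.
w = lambda x: np.exp(-x**2)
P2 = lambda x: x**2 - 0.5
h2 = quad(lambda x: P2(x)**2 * w(x), -np.inf, np.inf)[0]  # = sqrt(pi)/2

z = 15 + 5j  # a point far from the real line
num = lambda x: P2(x) * w(x) / (x - z)        # Cauchy kernel 1/(x - z)
re = quad(lambda x: num(x).real, -10, 10)[0]  # the weight's tails are negligible
im = quad(lambda x: num(x).imag, -10, 10)[0]
Y12 = (re + 1j * im) / (2j * np.pi)

# Orthogonality kills the z^{-1}, z^{-2} terms, leaving Y12 ~ -h2/(2 pi i z^3);
# the next correction is relatively of size O(1/z^2).
assert abs(Y12 / (-h2 / (2j * np.pi * z**3)) - 1) < 0.03
assert abs(h2 - np.sqrt(np.pi) / 2) < 1e-8
```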

## three-term recurrence relation via RHP

We observed early on that the first two conditions of RHP-$\boldsymbol Y_n$ are preserved when multiplying on the left by an entire matrix $\boldsymbol E(z)$, so let's try it out. In particular, can we use this to relate $\boldsymbol Y_n(z)$ and $\boldsymbol Y_{n-1}(z)$, since they have the same jump matrix, which is independent of $n$? The condition we need to pay attention to is the third one, so let's write \[ \boldsymbol Y_{n}(z)= \left(\boldsymbol I + \dfrac{1}{z} \left( \begin{matrix}a_n & b_n \\ c_n & d_n \end{matrix} \right) + \mathcal{O}_n \left( \dfrac{1}{z^2} \right)\right) \left(\begin{matrix} z^n & 0 \\ 0 &z^{-n} \end{matrix} \right) \quad \text{ as } \quad z \to \infty. \] Writing the last matrix as $\mathsf{diag}(z, z^{-1})\cdot\mathsf{diag}(z^{n-1}, z^{-(n-1)})$ and factoring the first term through yields (I did this in Mathematica) \[\boldsymbol Y_n(z)= \left( \begin{matrix} z- a_{n - 1} + a_n & -b_{n - 1} \\ c_n & 0 \end{matrix} \right) \left(\boldsymbol I + \dfrac{1}{z} \left( \begin{matrix}a_{n - 1} & b_{n - 1} \\ c_{n - 1} & d_{n - 1} \end{matrix} \right) + \mathcal{O}_n \left( \dfrac{1}{z^2} \right)\right) \left(\begin{matrix} z^{n - 1} & 0 \\ 0 &z^{-(n - 1)} \end{matrix} \right) \] as $z \to \infty$. By uniqueness of the solution of RHP-$\boldsymbol Y_{n-1}$, we arrive at

\[\boldsymbol Y_{n}(z) = \left( \begin{matrix} z- a_{n - 1} + a_n & -b_{n - 1} \\ c_n & 0 \end{matrix} \right) \boldsymbol Y_{n-1}(z).\] In particular, comparing the $(1, 1)$ entries in the equation above yields \[ P_n(z) = (z - a_{n-1}+a_n)P_{n-1}(z) -b_{n-1} Q_{n-2}(z). \] Setting $\alpha_n = a_{n-1} - a_n$ and $\beta_n = -\dfrac{2\pi i b_{n-1}}{h_{n-2}}$ (recall $Q_{n-2} = -\frac{2\pi i}{h_{n-2}}P_{n-2}$) yields the usual three-term recurrence relation \[ P_n(z) = (z-\alpha_n)P_{n-1}(z) -\beta_nP_{n-2}(z).\]
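The Mathematica computation above can also be reproduced symbolically. Here is a sketch in SymPy (dropping the $\mathcal{O}_n(1/z^2)$ tails, which only affect lower-order terms) checking that the prefactor $\boldsymbol E(z) = \boldsymbol Y_n(z) \boldsymbol Y_{n-1}(z)^{-1}$ matches the claimed linear matrix up to terms that vanish as $z \to \infty$.

```python
import sympy as sp

# Check that (I + A_n/z) diag(z, 1/z) (I + A_{n-1}/z)^{-1} agrees with the
# claimed linear matrix E(z) up to O(1/z). Subscript "m" stands for n-1.
z = sp.symbols('z')
an, bn, cn, dn = sp.symbols('a_n b_n c_n d_n')
am, bm, cm, dm = sp.symbols('a_m b_m c_m d_m')  # the n-1 coefficients

I = sp.eye(2)
An = sp.Matrix([[an, bn], [cn, dn]])
Am = sp.Matrix([[am, bm], [cm, dm]])
D = sp.diag(z, 1 / z)  # diag(z^n, z^-n) = D * diag(z^{n-1}, z^{-(n-1)})

E_exact = (I + An / z) * D * (I + Am / z).inv()
E_claim = sp.Matrix([[z - am + an, -bm], [cn, 0]])

# Every entry of the difference should vanish as z -> infinity.
for i in range(2):
    for j in range(2):
        assert sp.limit(sp.simplify(E_exact[i, j] - E_claim[i, j]), z, sp.oo) == 0
```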

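As a last sanity check (again a numerical illustration of my own), we can build the monic orthogonal polynomials for the Gaussian weight $w(x) = e^{-x^2}$ by Gram-Schmidt and verify the three-term recurrence directly, using the standard identities $\alpha_n = \frac{1}{h_{n-1}}\int x P_{n-1}^2 w \, dx$ and $\beta_n = h_{n-1}/h_{n-2}$; for this weight, $\alpha_n = 0$ and $\beta_n = (n-1)/2$.

```python
import numpy as np

# Monic orthogonal polynomials for w(x) = exp(-x^2), built by Gram-Schmidt on
# monomials; inner products via Gauss-Hermite quadrature (exact for deg <= 119).
nodes, weights = np.polynomial.hermite.hermgauss(60)

def inner(p, q):
    # <p, q> = ∫ p(x) q(x) exp(-x^2) dx
    return np.sum(weights * p(nodes) * q(nodes))

P = [np.polynomial.Polynomial([1.0])]
for n in range(1, 6):
    p = np.polynomial.Polynomial.basis(n)  # the monomial x^n
    for q in P:
        p = p - (inner(p, q) / inner(q, q)) * q
    P.append(p)

h = [inner(p, p) for p in P]            # the norming constants h_n
x = np.polynomial.Polynomial([0.0, 1.0])

for n in range(2, 6):
    alpha = inner(x * P[n - 1], P[n - 1]) / h[n - 1]  # = 0 for an even weight
    beta = h[n - 1] / h[n - 2]
    resid = P[n] - (x - alpha) * P[n - 1] + beta * P[n - 2]
    assert np.max(np.abs(resid.coef)) < 1e-8  # three-term recurrence holds
    assert abs(beta - (n - 1) / 2) < 1e-10    # known value for this weight
```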
## why this?

While this isn't a terribly natural characterization of orthogonal polynomials, it turns out to be an extremely useful one. An asymptotic formula for $\boldsymbol Y_n$ gives us asymptotic formulas for the orthogonal polynomials, their Cauchy transforms, *and* the recurrence coefficients. This kind of asymptotic analysis is possible via Deift and Zhou's nonlinear steepest descent method. That being said, the analysis often requires knowledge of fine properties of the weight $w(z)$. Furthermore, in the formula for $\boldsymbol Y_n$, we needed the monic orthogonal polynomial $P_n$ of degree $n$. If the weight $w(z)$ is not positive, then the orthogonal polynomial of degree $n$ may not be the $n$th orthogonal polynomial, and such a polynomial may not exist at all. This means that the asymptotic analysis needs to be done along subsequences where such polynomials do exist. This degeneration has a reasonable interpretation in the setting of Padé approximation, but more on this later!