Chapter 20

Adaptive Optics.

An optical system, like the eye, has many flaws (aberrations) that prevent it from forming a perfectly sharp image. In previous chapters we've looked at how to measure and correct the main flaws that can occur, namely myopia, hyperopia, and astigmatism; that is, problems that lead to defocus of the image. However, even when these are corrected, there remain other, harder to describe, flaws in the way the eye tries to focus light, some of which were mentioned in the blur chapter (Chapter 6). These remaining flaws are less important than the main ones, but they limit our ability to see, and they also limit our ability to examine the interior of the eye in detail.

If we could measure and correct all the optical flaws of the eye, we could have "super-vision" where our eyes are optically perfect, and the only blur remaining is due to diffraction. More useful, however, is that by correcting - even temporarily - all the optical flaws, we can see the retina in even more detail than possible using ordinary optical techniques.

Adaptive optics is a technology that lets us do exactly that. It was originally invented in astronomy, as a way of correcting the optical flaws of the atmosphere - that is, the random turbulence that makes the stars twinkle. However, as the costs of the technology came down, it found uses elsewhere. While "super-vision" is still some way off, and perhaps not even all that desirable, adaptive optics lets us examine the interior of the eye with a detail and clarity that would be impossible with any other method.

Wavefronts and Aberrations.

A wavefront is what we called a wave crest in Chapter 19. More generally, a wavefront is a line or curve connecting all the points on a wave that are at the same stage of their cycle: all crests, all troughs, or all at exactly the same position between a crest and a trough (e.g. halfway between them). The shape of the wavefront at one moment is all we need to know about the wave, because we can use the Huygens-Fresnel principle to work out the wavefront shape at successive moments after that.
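As a very rough numerical illustration of the Huygens-Fresnel idea (not a rigorous diffraction calculation: the obliquity factor and overall normalisation are ignored, and the aperture size, wavelength, and distances below are all made up), each sample point on a known wavefront can be treated as the source of a secondary wavelet, and summing the wavelets gives the wave a short distance downstream:

```python
import numpy as np

# Sample points across a 2 mm wide flat wavefront (an illuminated slit).
wavelength = 550e-9                      # green light (m)
k = 2 * np.pi / wavelength               # wavenumber
x_src = np.linspace(-1e-3, 1e-3, 2000)   # secondary-wavelet source points (m)
x_obs = np.linspace(-2e-3, 2e-3, 200)    # points on a downstream plane (m)
z = 50e-3                                # propagation distance (m)

# Huygens-Fresnel sum: add up a spherical wavelet from every source point.
field = np.zeros(len(x_obs), dtype=complex)
for i, xo in enumerate(x_obs):
    r = np.sqrt(z**2 + (xo - x_src) ** 2)      # path length of each wavelet
    field[i] = np.sum(np.exp(1j * k * r) / r)  # wavelets interfere here

intensity = np.abs(field) ** 2  # the diffracted wave on the downstream plane
```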

If we want to create a sharp image, we must create a wavefront which is perfectly circular, as shown in Figure 1(a). Here a perfectly circular wavefront converges, not to a point, because that is impossible in wave optics, but to a tiny region. However, even tiny deviations from perfect circularity cause a failure of proper convergence, resulting in blur, as shown in Figure 1(b).

Figure 1(a) A perfectly circular wavefront converges to the right in this movie. Most of the wave converges to a small area, with only a small amount of the wave outside the convergent area. The size of this area is limited solely by diffraction.
Figure 1(b) This imperfect wavefront looks like it might converge, to begin with, but fails to do so as well as a perfect wavefront. These wavefront imperfections can't be corrected by simple lenses.
Figure 1(c) A perfectly flat wavefront continues unchanged.
Figure 1(d) A non-flat wavefront looks totally different by the time it has reached the right hand side of this movie.

Spectacles can change the average radius of the circular wavefronts in the eye (by changing the overall convergence or divergence of the light), but they can't correct the random imperfections, seen in the wavefront in Figure 1(b), which are unique to each eye. In fact, spectacles can introduce their own imperfections into the waves passing through them.
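To see that distinction in numbers, here is a minimal sketch (the pupil size, defocus, and ripple below are all invented for illustration) in which a sphere-cylinder correction is modelled as removing the best-fitting parabola from the wavefront error along one meridian, leaving the irregular part behind:

```python
import numpy as np

# A made-up wavefront error across one meridian of the pupil: a smooth
# defocus term plus a small irregular ripple (all values hypothetical).
y = np.linspace(-3e-3, 3e-3, 41)                   # positions across the pupil (m)
defocus = 0.5 * y**2                               # roughly 1 dioptre of defocus
ripple = 50e-9 * np.sin(2 * np.pi * y / 1.5e-3)    # the irregular part
wavefront_error = defocus + ripple

# A sphere-cylinder (spectacle) correction can only remove the smooth,
# second-order part, modelled here as the best-fitting parabola.
best_parabola = np.polyval(np.polyfit(y, wavefront_error, 2), y)
residual = wavefront_error - best_parabola

print("residual error after 'spectacle' correction (nm, RMS):",
      round(float(np.std(residual)) * 1e9, 1))
```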

Chapter 6 introduced some aberrations that are common in lenses and the human eye. In adaptive optics, we consider an aberration to be any difference between a wavefront and the "perfect" wavefront (either circular or flat) that we require in the given situation.

If we have an imperfect wavefront like that shown in Figure 1(b) or Figure 1(d), and we want to correct it to a perfect wavefront like that in Figure 1(a) or Figure 1(c), we must first measure the imperfections.

Measuring Wavefronts.

In adaptive optics, a wavefront can be measured using a device called a Shack-Hartmann sensor. This uses an array of tiny lenses (often called "lenslets"), arranged in rows, to measure the shape of a wavefront hitting it. The lenslets are placed one focal length from an image screen. To see how a Shack-Hartmann sensor works, we will look at what sort of images are formed when variously shaped wavefronts hit the sensor.

Although the Shack-Hartmann sensor is designed to measure the shape of wavefronts, it is a lot easier to explain how it works by thinking about light as rays.

Figure 2(a) shows a Shack-Hartmann sensor in action. On the right, the sensor is shown front on: it consists of a set of lenslets (light blue circles) in front of an image screen (green rectangle). Lenslets are typically very small, less than \( 0.5\text{mm} \) wide. On the left, a side view of the middle column of lenslets is shown. The lenslets are one focal length away from the image screen.

Figure 2(a) shows what happens when a flat wavefront (i.e. parallel rays of light) hits the sensor. If the waves hit straight on, each lenslet will see a tiny bundle of parallel rays (a flat wave), and focus it to a point on the image screen. The image on the screen will consist of a set of dots, with each dot centred behind its lenslet.

If the flat waves hit at an angle (Figure 2(b)), the screen will still show a set of dots, but they will be shifted to one side or the other. This is the key point: when a bundle of rays (or a small wave) hits a lenslet at an angle, the lenslet's image dot shifts away from the centre of the screen behind that lenslet.
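To make this quantitative: for a lenslet of focal length \( f \), a bundle of parallel rays tilted by a small angle \( \theta \) comes to a focus displaced from the centre of the lenslet by approximately

\[ \Delta x = f \tan\theta \approx f\,\theta . \]

So dividing the measured shift of each dot by the focal length gives the local tilt of the wavefront over that lenslet. For example, with a (made-up) focal length of \( 24\text{mm} \), a dot shift of \( 24\,\mu\text{m} \) corresponds to a tilt of about \( 0.001 \) radians.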

Figure 2(a) A Shack-Hartmann Sensor consists of an array of lenslets (tiny lenses, drawn light blue) and an image screen (drawn in green). The right hand diagram shows the lenslets arranged in rows and columns in front of an image screen. The left hand diagram shows the middle column of lenslets from the side. The distance between the lenslets and the image screen is one focal length.

On the left, parallel light rays hit the lenslet array straight on. The rays correspond to a flat wavefront shown as a blurred line. Each lenslet focuses the rays hitting it to a point directly behind the lens. On the right, the focussed points are shown drawn on top of the lenslet array. This pattern of dots - equally spaced and centred - tells us that the rays are parallel.

Figure 2(b) A Shack-Hartmann Sensor when the rays are parallel but strike the lenslets at an angle. The corresponding wavefront is shown as a blurred line. Here, all the dots have been shifted upwards compared to Figure 2(a). The shift of the dots tells us the angle at which the rays are hitting each lenslet.
Figure 2(c) A Shack-Hartmann Sensor when the rays are diverging. The corresponding circular wavefront is shown as a blurred arc. Note that the rays are always perpendicular to the wavefront where they are drawn.

In this case, the angle at which the rays hit the lenslets varies across the lenslet array. Each individual lenslet, because it is so small, sees roughly parallel rays and converges them to a point; the position of the (roughly) focused point depends on the average tilt of the rays hitting that lenslet.

As a result, all the focussed points spread away from the centre of the lenslet array. This pattern of dots thus indicates that divergent light has hit the lenslet array.

Figure 2(d) A Shack-Hartmann Sensor when the rays are converging. The corresponding wavefront is shown as a blurred arc. This is the opposite of the diverging case. Here all the focussed dots are shifted towards the centre.
Figure 2(e) Two diagrams of Shack-Hartmann Sensor outputs when the light has different amounts of convergence or divergence along different axes.

On the left, the dots are centred vertically in the lenslets, but moved towards the centre horizontally. This suggests that the rays of light hitting the lenslet array are parallel in the vertical axis (because the dots are centred vertically) but convergent along the horizontal axis.

On the right, the dots are moved towards the centre vertically, but away from the centre horizontally. This suggests that the rays of light hitting the lenslet array are converging along the vertical axis, but diverging along the horizontal axis.

Figure 2(c) shows a circular wavefront hitting the lenslet array (equivalent to diverging light). The waves at the centre of the lenslet array will be hitting the lenslets flat on, and so will form little images directly behind the lenslets. The waves at the edge will be hitting the lenslets at an angle, and will form images displaced outwards from the lenslet centres. So the divergent waves will create an array of dot images spaced further apart than the flat waves. A convergent wave will do the opposite: it will create an array of dots which are spaced closer together than a flat wave (Figure 2(d)). Figure 2(e) shows patterns for astigmatic wavefronts.
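As a rough paraxial check on the spacing: if the light is diverging from a point a distance \( L \) to the left of the sensor, the rays reaching a lenslet at height \( y \) are tilted by roughly \( y/L \), so its dot lands at

\[ y + f\,\frac{y}{L} = y\left(1 + \frac{f}{L}\right), \]

which stretches the dot spacing by a factor of \( 1 + f/L \). For light converging towards a point a distance \( L \) beyond the sensor, the sign flips and the spacing shrinks by a factor of \( 1 - f/L \).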

Finally, if an irregular wavefront hits the lenslet array, there will be no consistent pattern to the image dots, but the shift of each dot tells you the angle at which the wave hit that lenslet. From that information, you can work back to figure out the exact shape of the wavefront when it hit the lenslet array.
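As a minimal sketch of that working-back step, in one dimension and with made-up numbers (the lenslet pitch, focal length, and dot shifts below are all hypothetical): each dot shift gives the local slope of the wavefront, and summing slope times lenslet spacing across the array rebuilds the wavefront shape:

```python
import numpy as np

pitch = 0.4e-3        # spacing between lenslet centres (m) - hypothetical
focal_length = 24e-3  # lenslet focal length (m) - hypothetical

# Measured shift of the dot under each lenslet in one column (m).
# Dots spreading outwards like this suggest diverging light (Figure 2(c)).
dot_shift = np.array([-30, -21, -13, -4, 4, 13, 21, 30]) * 1e-6

# Each shift gives the local tilt (slope) of the wavefront over its lenslet.
slope = dot_shift / focal_length

# A running sum of slope x spacing gives the wavefront height at each lenslet.
wavefront = np.cumsum(slope) * pitch
wavefront -= wavefront[0]   # heights relative to the first lenslet

print("reconstructed wavefront heights (nm):", np.round(wavefront * 1e9, 1))
```

Real instruments do the same job in two dimensions, typically with a least-squares fit rather than a simple running sum.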

Wavefront Correction.

A Shack-Hartmann sensor measures the shape of a wavefront, but how do we change that shape? For example, if we want a flat wavefront (i.e. perfectly parallel light) but our wavefront isn't flat, how do we make it flat? In adaptive optics, wavefronts are reshaped using a "deformable mirror." A deformable mirror is a mirror made of a flexible substance (plastic, say, or metal) that can be precisely reshaped to change the shape of a wavefront reflected off it. The mirror shape is controlled by tiny piezoelectric pistons, which change their length precisely in response to an applied voltage.

Although a deformable mirror is designed to reshape a wavefront, it is a lot easier to explain how it works by thinking about the light as individual photons.

The idea behind a deformable mirror is that if we make a mirror whose shape matches the wavefront, the reflected wavefront will be flat. This is shown in Figure 3. In this animation (which you can tap or click to stop/start) a wavefront (represented here by a set of photon particles) enters from the left. Each wavefront is different, but we suppose that a Shack-Hartmann sensor has already measured the wavefront shape. The deformable mirror on the right is drawn as a set of facets that can move independently. (In real deformable mirrors, the reflecting surface is usually a continuous flexible sheet.) The mirror reshapes itself so that the photons are in a line when they have all been reflected, which represents a flat wavefront.

Figure 3 A deformable mirror.

How is this done? It's easiest to see if we just have two photons as in Figure 4 on the left. The top photon is a distance \( d \) ahead of the bottom photon. They are heading towards two flat mirrors which are a distance \( d/2 \) apart. On the right hand side, the top photon has now drawn level with the bottom mirror, so it is a distance \( d/2 \) from the top mirror.

At this point, the bottom photon must travel \( d \) until it hits the mirror and is reflected. The top photon must travel \( d/2 \) to its mirror, at which point it travels \( d/2 \) back, to draw level with the bottom photon. Thus, to reflect two photons so that they come out level with each other, the distance between the mirror facets must be half the distance between the photons. With multiple photons and mirrors, we just repeat this process. We get the distances between photons, or rather, parts of the wavefront, from the Shack-Hartmann sensor, and then adjust the distances between the mirror facets to be half that.

Figure 4 How the shape of the deformable mirror is worked out.
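A tiny numerical sketch of this rule (the wavefront values below are invented, and everything is measured in nanometres): each mirror facet is set back by half the amount its part of the wavefront is ahead, so the extra there-and-back path exactly uses up the lead:

```python
import numpy as np

# How far ahead each part of the measured wavefront is, relative to the
# most-lagging part (nm) - hypothetical values from a Shack-Hartmann sensor.
wavefront_advance = np.array([0.0, 120.0, 310.0, 250.0, 90.0])

# Set each facet back by half the advance: that part of the wave then travels
# the extra distance twice (in and back out), which cancels its lead exactly.
facet_setback = wavefront_advance / 2.0

# Check: the lead remaining after reflection is zero for every facet.
remaining_lead = wavefront_advance - 2.0 * facet_setback
print("facet setbacks (nm):", facet_setback)
print("lead remaining after reflection (nm):", remaining_lead)
```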

Applications.

"Super" Vision.

When adaptive optics was applied to human vision, one early application was expected to be correcting all the optical flaws in the eye, either by a contact lens or by laser corneal reshaping, to give perfect vision far beyond what could be achieved by simple sphere-cylinder spectacles or contacts. However, there turn out to be a few disadvantages. Reshaping the cornea by laser ablation is a little less accurate than required, and has issues with clouding and glare. Another problem is the finding that the human brain has adapted to vision with all the optical flaws built in, and vision is sometimes worse when they are removed.

Fundus Photography.

Adaptive optics has instead become more useful for imaging the inside of the eye; that is, as a way of making ophthalmoscopy (usually with fundus cameras) better. The usual setup for an AO-enhanced imager is shown in Figure 5. A guide light (usually a laser) shines onto the retina, and the laser light is reflected off the retina and leaves the eye. The wavefront of this light is reflected off a deformable mirror and then passes to a camera. Some of the light from the deformable mirror is diverted into a Shack-Hartmann sensor.

The Shack-Hartmann sensor measures the wavefront shape, and then a computer calculates what shape the deformable mirror has to be to make the wavefront flat. The mirror adjusts its shape accordingly.

Figure 5 Using Adaptive optics to photograph the retina. Light reflected off the retina (1) leaves the eye with a distorted wavefront (2). It hits a deformable mirror (3) which corrects the wavefront to a flat shape (4). The wavefront hits a semitransparent mirror (5). Half continues on to a conventional camera (6), which forms an image. The other half is reflected towards a Shack-Hartmann sensor (7). If the wavefront that arrives at the sensor is not flat, the sensor sends a message (8) to the deformable mirror to adjust itself.
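This measure-and-adjust cycle is normally run repeatedly as a feedback loop. The sketch below (with made-up numbers, and the optics reduced to simple arithmetic) shows why repeatedly nudging the mirror towards the half-error shape drives the residual wavefront error towards zero:

```python
import numpy as np

rng = np.random.default_rng(0)

# The distorted wavefront leaving the eye, sampled at 8 mirror facets (nm).
true_aberration = rng.normal(0.0, 300.0, size=8)

facet_setback = np.zeros(8)   # the deformable mirror starts flat
gain = 0.5                    # a loop gain below 1 keeps the loop stable

for step in range(20):
    # Wavefront error seen by the Shack-Hartmann sensor after reflection:
    # each facet setback removes twice its value from the error.
    residual = true_aberration - 2.0 * facet_setback

    # Step (8) in Figure 5: nudge the mirror towards the half-error shape.
    facet_setback += gain * residual / 2.0

print("remaining wavefront error (nm, RMS):",
      float(np.std(true_aberration - 2.0 * facet_setback)))
```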

Autorefractors.

The problem here is the same as before - the reflected light could come from anywhere in the retina, not just the photoreceptor layer, and so it is very difficult to figure out even the amount of defocus.