An optical system, like the eye, has many flaws (aberrations) that prevent it from forming a perfectly sharp image. In previous chapters we've looked at how to measure and correct the main flaws that can occur, namely myopia, hyperopia, and astigmatism; that is, problems that lead to defocus of the image. However, even when these are corrected, there remain other, harder-to-describe flaws in the way the eye tries to focus light, some of which were mentioned in the blur chapter (Chapter 6). These remaining flaws are less important than the main ones, but they limit our ability to see, and they also limit our ability to examine the interior of the eye in detail.
If we could measure and correct all the optical flaws of the eye, we could have "super-vision" where our eyes are optically perfect, and the only blur remaining is due to diffraction. More useful, however, is that by correcting - even temporarily - all the optical flaws, we can see the retina in even more detail than possible using ordinary optical techniques.
Adaptive optics is a technology that lets us do that. Adaptive optics was originally invented in astronomy, as a way of correcting the optical flaws of the atmosphere - that is, the random turbulence that makes the stars twinkle. However, as the costs of the technology came down, it found uses elsewhere. While "super-vision" is still some way off, and perhaps not even all that desirable, adaptive optics lets us examine the interior of the eye with a detail and clarity that would be impossible with any other method.
A wavefront is what we called wave crests in Chapter 19. A wavefront is a line or curve which connects together all points on a wave that are either crests, troughs, or at exactly the same place between a crest and a trough (e.g. halfway between the crest and trough). The shape of the wavefront at one moment is all we need to know about the wave, because we can use the Huygens-Fresnel principle to work out the wavefront shape at successive moments after that.
If we want to create a sharp image, we must create a wavefront which is perfectly circular, as shown in Figure 1(a). Here a perfectly circular wavefront converges, not to a point, because that is impossible in wave optics, but to a tiny region. However, even tiny deviations from perfect circularity cause a failure of proper convergence, resulting in blur, as shown in Figure 1(b).
Spectacles can change the average radius of the concentric circles in the eye (by changing the overall convergence or divergence of the light), but they can't correct the random imperfections, seen in the wavefront in Figure 1(b), which are unique to each eye. In fact, spectacles can introduce their own imperfections into the waves passing through them.
Chapter 6 introduced some aberrations that are common in lenses and the human eye. In adaptive optics, we consider an aberration to be any difference between a wavefront and the "perfect" wavefront (either circular or flat) that we require in the given situation.
If we have an imperfect wavefront like that shown in Figure 1(b) or Figure 1(d), and we want to correct it to a perfect wavefront like that in Figure 1(a) or Figure 1(c), we must first measure the imperfections.
In adaptive optics, a wavefront can be measured using a device called a Shack-Hartmann sensor. This uses an array of tiny lenses (often called "lenslets"), arranged in rows, to measure the shape of a wavefront hitting it. The lenslets are placed one focal length from an image screen. To see how a Shack-Hartmann sensor works, we will see what sort of images are formed when various shaped wavefronts hit the sensor.
Although the Shack-Hartmann sensor is designed to measure the shape of wavefronts, it is a lot easier to explain how it works by thinking about light as rays.
Figure 2(a) shows a Shack-Hartmann sensor in action. The sensor is shown face-on on the right: it consists of a set of lenslets (light blue circles) in front of an image screen (green rectangle). Lenslets are typically very small, less than \( 0.5\text{mm} \) wide. On the left, a side view of the middle column of lenslets is shown. The lenslets are one focal length away from the image screen.
Figure 2(a) shows what happens when a flat wavefront (i.e. parallel rays of light) hits the sensor. If the waves hit straight on, each lenslet will see a tiny bundle of parallel rays (a flat wave), and focus it to a point on the image screen. The image on the screen will consist of a set of dots, with each dot centred behind each lenslet.
If the flat waves hit at an angle (Figure 2(b)), the screen will still show a set of dots, but they will be shifted to one side or the other. This is really the important point here: when a bundle of rays (or a small wave) hits a lenslet at an angle, it will cause the lenslet's image dot to shift away from the centre of the screen below the lenslet.
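The size of the shift follows from the lenslet geometry: because the screen sits one focal length behind the lenslet, a wave tilted by an angle \( \theta \) shifts the dot sideways by \( f\tan\theta \). The sketch below illustrates this with made-up numbers (the 5 mm focal length and 1 degree tilt are assumptions for illustration, not values from any real sensor):

```python
import math

# A tilted flat wavefront hitting one lenslet: the lenslet focuses the
# light one focal length f behind it, so a tilt of angle theta moves the
# focused dot sideways by f * tan(theta).

def dot_shift(focal_length_mm, theta_deg):
    """Sideways shift (mm) of a lenslet's image dot for a wave tilted by theta."""
    return focal_length_mm * math.tan(math.radians(theta_deg))

# Illustrative numbers: a 5 mm focal length and a 1 degree tilt
print(round(dot_shift(5.0, 1.0), 4))  # about 0.0873 mm
```

Note the shift depends only on the tilt angle and the focal length, which is why measuring the dot positions tells us the wave's direction at each lenslet.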
Figure 2(c) shows a circular wavefront hitting the lenslet array (equivalent to diverging light). The waves at the centre of the lenslet array will be hitting the lenslets flat on, and so will form little images directly behind the lenslets. The waves at the edge will be hitting the lenslets at an angle, and will form images displaced outwards from the lenslet centres. So the divergent waves will create an array of dot images spaced further apart than the flat waves. A convergent wave will do the opposite: it will create an array of dots which are spaced closer together than a flat wave (Figure 2(d)). Figure 2(e) shows patterns for astigmatic wavefronts.
Finally, if an irregular wavefront hits the lenslet array, there will be no consistent pattern to the image dots, but the shift of each dot under each lenslet element will tell you the angle that the wave hit that lenslet. From that information, you can work back to figure out the exact shape of the wavefront when it hit the lenslet array.
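The working-back step can be sketched in code. Each dot shift, divided by the focal length, gives the local slope of the wavefront at that lenslet; summing the slopes along the array (a running integral) rebuilds the wavefront's height profile. The numbers below (focal length, lenslet pitch, and the "true" wavefront) are invented purely for illustration:

```python
import numpy as np

focal_length = 5.0   # mm, lenslet focal length (assumed)
pitch = 0.3          # mm, spacing between lenslets (assumed)

# A made-up "true" wavefront height at each lenslet (micrometres)
true_wavefront = np.array([0.0, 0.1, 0.3, 0.6, 1.0])

# The sensor only sees the local slopes (differences between neighbours),
# which appear on the screen as dot shifts of slope * focal_length
slopes = np.diff(true_wavefront) / pitch
dot_shifts = slopes * focal_length

# Work backwards: shifts -> slopes -> running sum -> wavefront shape
recovered_slopes = dot_shifts / focal_length
recovered = np.concatenate([[0.0], np.cumsum(recovered_slopes * pitch)])

print(np.allclose(recovered, true_wavefront))  # True
```

Real sensors do this in two dimensions, fitting a smooth surface to the grid of measured slopes, but the principle is the same: the dots give slopes, and the slopes give the shape.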
A Shack-Hartmann sensor measures the shape of a wavefront, but how do we change its shape? For example, if we want a flat wavefront (i.e. perfectly parallel light) but our wavefront isn't flat, how do we make it flat? In adaptive optics, wavefronts are reshaped using a "deformable mirror." A deformable mirror is a mirror made of a flexible substance (plastic, say, or metal) that can be precisely reshaped to change the shape of a wavefront reflected off it. The mirror shape is controlled by tiny piezoelectric pistons, which change their size precisely in response to an electric current.
Although a deformable mirror is designed to reshape a wavefront, it is a lot easier to explain how it works by thinking about the light as individual photons.
The idea behind a deformable mirror is that if we make a mirror whose shape matches the wavefront, the reflected wavefront will be flat. This is shown in Figure 3. In this animation (which you can tap or click to stop/start) a wavefront (represented here by a set of photon particles) enters from the left. Each wavefront is different, but we suppose that a Shack-Hartmann sensor has already measured the wavefront shape. The deformable mirror on the right is in fact a set of facets that can move independently. (In reality, deformable mirrors are a continuous sheet.) The mirror reshapes itself so that the photons are in a line when they all reflect, which represents a flat wavefront.
How is this done? It's easiest to see if we just have two photons as in Figure 4 on the left. The top photon is a distance \( d \) ahead of the bottom photon. They are heading towards two flat mirrors which are a distance \( d/2 \) apart. On the right hand side, the top photon has now drawn level with the bottom mirror, so it is a distance \( d/2 \) from the top mirror.
At this point, the bottom photon must travel \( d \) until it hits the mirror and is reflected. The top photon must travel \( d/2 \) to the mirror, at which point it travels \( d/2 \) back, to draw level with the bottom photon. Thus, to reflect two photons to come out in parallel, the distance between the mirror facets must be half the distance between the photons. With multiple photons and mirrors, we just repeat this process. We get the distances between photons, or rather, parts of the wavefront, from the Shack-Hartmann sensor, and then adjust the distances between the mirror facets to be half that.
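This half-distance rule can be written out as a short sketch. Given the lead of each part of the wavefront (illustrative numbers below, as a real system would take them from the Shack-Hartmann measurement), each facet is displaced by half that lead, and the round trip to the recessed facet and back exactly cancels the lead:

```python
# Lead of each part of the wavefront relative to the most delayed part
# (micrometres) - illustrative values standing in for a sensor measurement
photon_leads = [4.0, 2.0, 1.0, 0.0]

# Each facet is recessed by half the corresponding lead
facet_displacements = [d / 2 for d in photon_leads]
print(facet_displacements)  # [2.0, 1.0, 0.5, 0.0]

# After reflection every photon has travelled the same total distance:
# a photon leading by d travels an extra d/2 to reach its recessed facet
# and d/2 back, so the remaining lead is d - 2*(d/2) = 0 everywhere
residual = [lead - 2 * disp for lead, disp in zip(photon_leads, facet_displacements)]
print(residual)  # [0.0, 0.0, 0.0, 0.0]
```

The factor of two arises because a mirror displacement is traversed twice, once on the way in and once on the way out.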
When adaptive optics was applied to human vision, one early application was thought to be correcting all the optical flaws in the eye, either by a contact lens or by laser corneal reshaping, to give perfect vision far beyond what could be achieved by simple sphere-cylinder spectacles or contacts. However, there turn out to be a few disadvantages. Reshaping the cornea by laser ablation is a little less accurate than required, and has issues with clouding and glare. Another problem is the finding that the human brain has adapted to vision with all the optical flaws built in, and vision is sometimes worse when they are removed.
Adaptive optics has instead become more useful for imaging the inside of the eye; that is, as a way of making ophthalmoscopy (usually with fundus cameras) better. The usual setup for an AO-enhanced imager is shown in Figure 5. A guide light (usually a laser) shines onto the retina, and the laser light is reflected off the retina and leaves the eye. The wavefront of this light is reflected off a deformable mirror and then passes to a camera. Some of the light from the deformable mirror is diverted into a Shack-Hartmann sensor.
The Shack-Hartmann sensor measures the wavefront shape, and then a computer calculates what shape the deformable mirror has to be to make the wavefront flat. The mirror adjusts its shape accordingly.
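In practice this measure-and-correct cycle runs repeatedly as a feedback loop. A minimal sketch of such a loop is given below, under simplifying assumptions not stated in the text: the wavefront error at each facet is treated as a single number, the gain value of 0.5 is invented for illustration, and each iteration nudges the facet toward half the remaining error (the half-distance rule from the two-photon example):

```python
# Measured wavefront error at each mirror facet (micrometres) - made-up values
wavefront_error = [1.0, -0.6, 0.4, -0.2]
mirror = [0.0] * len(wavefront_error)   # facet displacements, initially flat
gain = 0.5                              # loop gain below 1 for stability (assumed)

for _ in range(20):
    # Residual error after reflection: each facet displacement acts twice
    residual = [e - 2 * m for e, m in zip(wavefront_error, mirror)]
    # Move each facet part-way toward half the remaining error
    mirror = [m + gain * r / 2 for m, r in zip(mirror, residual)]

residual = [e - 2 * m for e, m in zip(wavefront_error, mirror)]
print(all(abs(r) < 1e-4 for r in residual))  # True - wavefront is now flat
```

Each pass shrinks the residual by the gain factor, so the loop converges to a flat wavefront; real systems run this cycle many times per second to track the eye's constantly changing aberrations.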
The problem here is the same as before - the reflected light could come from anywhere in the retina, not just the photoreceptor layer, and so it is very difficult to figure out even the amount of defocus.