| Title | Increasing estimation precision in localization microscopy |
| Publication Type | dissertation |
| School or College | College of Science |
| Department | Physics & Astronomy |
| Author | Ebeling, Carl G. |
| Date | 2015-05 |
| Description | This dissertation studies detection-based methods to increase the estimation precision of single point-source emitters in the field of localization microscopy. Localization microscopy is a novel method allowing for the localization of optical point-source emitters below the Abbe diffraction limit of optical microscopy. This is accomplished by optically controlling the active, or bright, state of individual molecules within a sample. The use of time-multiplexing of the active state allows for the temporal and spatial isolation of single point-source emitters. Isolating individual sources within a sample allows for statistical analysis of their emission point-spread function profile, and the spatial coordinates of the point-source may be discerned below the optical response of the microscope system. Localization microscopy enables the identification of individual point-source emitter locations approximately an order of magnitude below standard, diffraction-limited optical techniques. The precision of localization microscopy methods is limited by the statistical uncertainty with which the location of these sources may be estimated. By utilizing a detection-based interferometer, an interference pattern may be superimposed over the emission signal. Theoretical analysis and Monte-Carlo simulations by means of Fisher information theory demonstrate that the incorporation of a modulation structure over the emission signal allows for more precise estimation when compared to conventional localization methods for the same number of recorded photons. These theoretical calculations and simulations are demonstrated through the use of two proof-of-concept experiments utilizing a modified Mach-Zehnder interferometer. The first methodology improves the localization precision of a single nanoparticle over the theoretical limit for an Airy-disk point-spread function by using self-interference to spatially modulate the recorded point-spread function. Experimental analysis demonstrates an improvement factor of ~3 to 5 over conventional localization methods. A related method employs the phase induced onto the Fourier domain signal due to path length differences in the Mach-Zehnder interferometer to improve localization precision. The localization capability of a modified Fourier domain signal generated by self-interference is utilized to yield a two-fold improvement in the localization precision for a given number of photons compared to a standard Gaussian intensity distribution of the corresponding point-spread function. |
| Type | Text |
| Publisher | University of Utah |
| Subject | Estimation precision; Interference; Localization microscopy; Super resolution |
| Dissertation Institution | University of Utah |
| Dissertation Name | Doctor of Philosophy |
| Language | eng |
| Rights Management | Copyright © Carl G. Ebeling 2015 |
| Format | application/pdf |
| Format Medium | application/pdf |
| Format Extent | 3,544,081 Bytes |
| Identifier | etd3/id/3746 |
| ARK | ark:/87278/s6z63xb2 |
| Setname | ir_etd |
| ID | 197297 |
| OCR Text | INCREASING ESTIMATION PRECISION IN LOCALIZATION MICROSCOPY by Carl G. Ebeling. A dissertation submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Physics. Department of Physics and Astronomy, The University of Utah, May 2015. Copyright © Carl G. Ebeling 2015. All Rights Reserved.
The University of Utah Graduate School. STATEMENT OF DISSERTATION APPROVAL. The dissertation of Carl G. Ebeling has been approved by the following supervisory committee members: Jordan M. Gerton, Chair, 12/05/2014 (Date Approved); Erik M. Jorgensen, Member, 12/05/2014 (Date Approved); Saveez Saffarian, Member, 12/05/2014 (Date Approved); Stephan Lebohec, Member, 12/05/2014 (Date Approved); Paolo Gondolo, Member, 12/05/2014 (Date Approved); and by Carleton DeTar, Chair/Dean of the Department/College/School of Physics and Astronomy, and by David B. Kieda, Dean of The Graduate School.
ABSTRACT
This dissertation studies detection-based methods to increase the estimation precision of single point-source emitters in the field of localization microscopy. Localization microscopy is a novel method allowing for the localization of optical point-source emitters below the Abbe diffraction limit of optical microscopy. This is accomplished by optically controlling the active, or bright, state of individual molecules within a sample. The use of time-multiplexing of the active state allows for the temporal and spatial isolation of single point-source emitters. Isolating individual sources within a sample allows for statistical analysis of their emission point-spread function profile, and the spatial coordinates of the point-source may be discerned below the optical response of the microscope system. Localization microscopy enables the identification of individual point-source emitter locations approximately an order of magnitude below standard, diffraction-limited optical techniques. The precision of localization microscopy methods is limited by the statistical uncertainty with which the location of these sources may be estimated. By utilizing a detection-based interferometer, an interference pattern may be superimposed over the emission signal. Theoretical analysis and Monte-Carlo simulations by means of Fisher information theory demonstrate that the incorporation of a modulation structure over the emission signal allows for more precise estimation when compared to conventional localization methods for the same number of recorded photons. These theoretical calculations and simulations are demonstrated through the use of two proof-of-concept experiments utilizing a modified Mach-Zehnder interferometer. The first methodology improves the localization precision of a single nanoparticle over the theoretical limit for an Airy-disk point-spread function by using self-interference to spatially modulate the recorded point-spread function. Experimental analysis demonstrates an improvement factor of ≈3 to 5 over conventional localization methods. A related method employs the phase induced onto the Fourier domain signal due to path length differences in the Mach-Zehnder interferometer to improve localization precision.
The localization capability of a modified Fourier domain signal generated by self-interference is utilized to yield a two-fold improvement in the localization precision for a given number of photons compared to a standard Gaussian intensity distribution of the corresponding point-spread function.
To my wife Megan. To the next journey!
CONTENTS
ABSTRACT
LIST OF FIGURES
ACKNOWLEDGMENTS
CHAPTERS
1. INTRODUCTION TO OPTICAL MICROSCOPY
1.1 Motivation
1.2 Types of Microscopy
1.2.1 Optical Microscopy
1.2.2 Fluorescence Microscopy
1.3 Microscopy, Specificity, and Resolution
1.4 Summary and Outline
1.5 References
2. THE THEORETICAL FOUNDATIONS OF OPTICAL MICROSCOPY
2.1 The Principle of Fluorescence
2.1.1 Fluorophore Interactions with Light
2.1.2 Franck-Condon Principle
2.2 Resolution, the Point-Spread Function, and the Diffraction Limit
2.2.1 The Diffraction Limit
2.2.2 The Heisenberg Uncertainty Principle
2.3 The Angular Spectrum Representation
2.3.1 Propagating and Evanescent Waves
2.4 The Airy Profile and Rayleigh Criterion
2.5 Summary
2.6 References
3. CIRCUMVENTING THE DIFFRACTION BARRIER VIA OPTICAL METHODOLOGIES
3.1 Super Resolution Microscopy in its Many Forms
3.1.1 Optical Super-Resolution - Moving Beyond Abbe's Limit
3.1.2 Structured Illumination
3.1.3 STED Microscopy
3.2 Localization Microscopy
3.2.1 Information Extraction from the Point-Spread Function
3.2.2 Isolating Single Fluorophores
3.2.3 The Methodology of Localization Microscopy
3.2.4 Biological Examples of Localization Microscopy
3.3 Localization versus Resolution
3.4 Concluding Remarks
3.5 References
4. MODIFICATION OF THE POINT-SPREAD FUNCTION THROUGH SELF-INTERFERENCE
4.1 Theoretical Concept
4.2 Localization Ability and the Fisher Information Matrix
4.2.1 Derivation of the Fisher Information Matrix
4.3 Monte-Carlo Simulations
4.3.1 One-Dimensional Monte-Carlo Simulations
4.3.2 Effect of Rotating the Interference Fringes by 45°
4.3.3 Monte-Carlo Simulations of Target Rings
4.4 Experimental Setup
4.5 Experimental Results
4.5.1 Particle Tracking and Wide-field Imaging
4.6 Conclusions
4.7 References
5. INCREASED LOCALIZATION PRECISION BY INTERFERENCE FRINGE ANALYSIS
5.1 Motivation
5.2 Theory of Fourier Imaging
5.2.1 Use of Transmission Gratings in Fourier Imaging
5.3 Experimental Setup
5.3.1 Experimental Results
5.3.2 Comparison to Gaussian Localization
5.4 Localization Precision from Interferometric Fourier Image
5.4.1 Monte-Carlo Simulations for FILM
5.5 Localizing Single Particles
5.6 Fourier Imaging Localization Conclusions
5.7 References
6. OUTLOOK AND FUTURE DIRECTIONS
6.1 Improving the Transmission Grating Interferometer
6.1.1 Broadband Diffraction Grating System
6.1.2 Two-Dimensional Grating System
6.2 Correlation Fluorescence and Electron Microscopy
6.2.1 Methodology of Correlation Microscopy
6.2.2 Fiducial Markers and Error Registration
6.2.3 Synaptic Function Studies
6.3 Summary and Final Conclusions
6.3.1 Advantages of Differing Super-Resolution Modalities
6.3.2 Three-Dimensional Super-Resolution
6.3.3 Further Developments in Optical Microscopy
6.3.4 Final Thoughts
6.4 References
APPENDICES
A. MATHEMATICAL DESCRIPTION OF THE POINT-SPREAD FUNCTION
B. FISHER INFORMATION THEORY
C. MICROSCOPE DESIGN AND LAYOUT
D. DESIGN OF BINARY PHASE GRATING
E. INCREASED LOCALIZATION PRECISION BY INTERFERENCE FRINGE ANALYSIS SUPPLEMENTAL
LIST OF FIGURES
1.1 Example of an electron microscope image
1.2 Basic illustration of the design of a fluorescence microscope
1.3 Composite image showing one of the most utilized biological model organisms, the nematode worm Caenorhabditis elegans
1.4 A survey of various microscopy techniques
1.5 Figure representing an overlay between an optical image of individual proteins (colored features) with the structure of the cell as seen in the electron microscopy image underneath
2.1 Three-color fluorescence microscopy image of a cell
2.2 Chemical structure of two of the most common fluorophores used in fluorescence microscopy
2.3 Jablonski diagram illustrating the principle of fluorescence, and the allowed transition states between electronic (black) and vibrational states (dashed blue)
2.4 Optical transitions of a fluorescent molecule according to the Franck-Condon principle
2.5 Resolution target illustration
2.6 Figure illustrating the parameters of a focal spot
2.7 Plane wave representation of the angular spectrum
2.8 The optical configuration used in the calculation of the point-spread function
2.9 Illustration of an Airy profile, representing the image of a point-source in a diffraction-limited imaging system
3.1 Concept of structured illumination
3.2 Concept of STED microscopy
3.3 Numerical simulations illustrating the excitation, STED and effective PSFs in STED microscopy
3.4 Various profiles along the x-axis of the PSFs in the case of STED microscopy for differing values of the saturation intensity, I0
3.5 Illustration of an ideal and pixelated point-spread function (PSF)
3.6 3D surface illustration of a pixelated PSF, as would be recorded on a camera
3.7 Schematic of the concept of localizing on a single point source
3.8 Cartoon schematic illustrating the concept of the diffraction-limit of a sparsely distributed sample
3.9 Chemical structure of photo-activatable green fluorescent protein (PA-GFP)
3.10 Optically shelving an organic dye
3.11 Separating fluorophore active states in time
3.12 Concept of localization microscopy
3.13 Simulation showing fluorophores distributed on a spiral shape
3.14 Alexa647 labeled microtubules from BSC-1 African green monkey kidney epithelial cells
3.15 Resolution as a function of labeling density
3.16 Reconstruction of imaging data from Alexa647 labeled microtubules
4.1 Concept of point-spread function self-interference (PSI)
4.2 PSI Monte-Carlo simulations for localization precision versus signal photons
4.3 Modified PSFs for Monte-Carlo simulations
4.4 PSI Monte-Carlo simulations for localization precision versus background photons
4.5 PSI Monte-Carlo simulations for localization precision versus fidelity
4.6 Monte-Carlo simulations for rotated 1D transmission gratings
4.7 Monte-Carlo simulation of localization microscopy with a Gaussian PSF and PSI, the PSF self-interference
4.8 Schematic of experimental setup for modified PSF detection, with dual detection paths controlled by a flip mirror
4.9 Gold nanoparticle sample preparation
4.10 Experimentally recorded PSI
4.11 Localization results of a single stationary nanoparticle
4.12 Histogram of positions of Au nanoparticle in y (a) and x (b) after drift correction
4.13 Distributions of localization results of single Au nanoparticle
4.14 Particle tracking and wide-field image
5.1 Cartoon illustrating how a lens performs a Fourier transform on a signal
5.2 Schematic of the interferometric-based FILM
5.3 Schematic of the grating system with one transmission grating
5.4 Experimental setup of the FILM system
5.5 PSF versus Fourier plane image of Au nanoparticle
5.6 Phase values as a function of frame number
5.7 Phase values as a function of scan position
5.8 Phase values as a function of stage position for 20 nm step size (red) and calibration curve obtained by linear fit (black)
5.9 Fringe frequency values obtained by scanning the sample along the x-axis
5.10 FILM versus Gaussian localization for signal photons
5.11 FILM versus Gaussian localization for background photons
5.12 Localization of three nanoparticles (yellow spheres, not drawn to scale), at positions x0, 2L + x1, 2L + x2, where L is the scanning pixel (step) size, using the conventional Gaussian method (gray) of the nanoparticles from the APD image, and FILM (red)
6.1 Two methods to improve the grating system
6.2 Two examples of interference observed over the PSF within the interferometer configured in imaging mode
6.3 Images of the diffraction-limited fluorescence image from a cross-section of a C. elegans nematode
6.4 Error in the optical and electron micrograph image registration (in nanometers) between the optical and electron micrograph registration for each fiducial marker
6.5 Correlation optical and electron microscopy image at the diffraction limit
6.6 Correlation optical and electron microscopy image using localization microscopy
6.7 Composite image of three high magnification electron micrograph images
6.8 Electron micrographs of a neuromuscular junction in a C. elegans nematode
6.9 Localization images of the structure of a synapse in C. elegans
6.10 Updated chart of microscopy methods and their resolving power plotted as a function of chemical specificity
A.1 Schematic of optical geometry through a focusing lens
A.2 Geometrical representation and definition of coordinates for an aplanatic lens system
C.1 Layout of the laser launch, collimation optics, and mechanical shutter
C.2 AOTF and coupling optics
C.3 Initial portion of the excitation path, indicated by the cyan beam
C.4 Side view of the layout of the 4f scanning system
C.5 Schematic of a 4f system
C.6 Full detection path of microscope system, shown with the emission light leaving the objective and going through the 4f system
C.7 Detection path through the interferometer
C.8 Custom laser and piezo control software
C.9 Image of the custom software used to control the Prior xy stage
C.10 Image of the custom software used to control the scanning mirror and create an image
D.1 The grating system showing an input signal split into the +1 and -1 orders, and recombining at the detection plane
D.2 Sketch of the grating system
E.1 Wavefront modification through the interferometer
E.2 Plot of extracted phase values from numerical fitting (top) and Fourier analysis (bottom)
ACKNOWLEDGMENTS
This work owes many people a word of gratitude. I have had what some may consider an unconventional graduate school career, and my collection of academic mentors have all had a hand in making this dissertation a reality. Firstly, I must thank Dr. Jordan Gerton for giving me the chance years ago of starting a project in his lab that has morphed into my current project. I had no idea during our initial discussions where things would lead, but I can honestly say that if not for Jordan providing the initial opportunity and years of support, I would not be in the place I am today. I would like to say thank you for your continuously positive and optimistic demeanor, and providing a welcoming lab to work in. Secondly, I would like to thank Dr. Rajesh Menon of the Department of Electrical and Computer Engineering, whom I have worked in collaboration with over the bulk of my graduate career. Rajesh's relentless push forward is a welcome motivation, and helped me get through those long days in the lab. His discussions, insight, and support were always welcome, and always appreciated. And the last of my academic advisors, Dr. Erik Jorgensen of the Department of Biology, deserves a special thank you. It is not only for financial support over the past four years of my career, but for his willingness to take a chance on my skills being useful for him and his lab. Erik carries his passion for his science, and for life in general, on his sleeve more so than perhaps any person I have ever met, and to say that it is not infectious would be a lie. Erik's lab is a joy to work in, and that joy certainly starts at the top. I must also thank the various members of the three labs that I straddle. First, within Jordan's lab, I must thank Dr. Ben Mangum and Dr. Eyal Shafran, who both were patient enough with me when I first began in the lab, and put me on a solid foundation. I must also thank Dr. Analia Dall'asen, Dr. Anil Ghimire, Ben Martin, Lauren Simonsen, Yuchen Young, Jessica Johnston, and Charles McGuire, who were always around for discussions, help, and the refinement of ideas and welcoming critique. You made the lab a fun and enjoyable place to work. And lastly from Jordan's lab, I must thank Jason Martineau, who spent both a class as an undergraduate with me as a teacher's assistant, and fortunately for everyone, came back years later to take over my project. I thank you for your hours sitting in the dark taking the data for our papers, always with a cheerful and upbeat manner. Within the Menon lab, I must thank Peng Wang, for his hard work on a couple of projects we were involved in together; to say that I learned a lot from you is an understatement.
And most importantly, I need to thank Dr. Amihai Meiri, who joined the lab at the perfect time to help me push through the end of this project, and to help me immensely with his simulations and complicated data analysis. A large portion of this work deserves as much credit to Amihai as to myself. It has been great to work with you, even if we had to be confined to that perpetually dark room together. Within the Jorgensen lab, I must give a blanket acknowledgment to the entire lab - Drs. Wayne Davis, Manasa Gudheti, Patrick Allaire, Eric Bend, Cathy Dy, Christian Frøkjær-Jensen, Hsiao-Fen Han, Rob Hobson, Gunther Hollopeter, Randi Rawson, Matt Schwartz, Shigeki Watanabe, and students Eddie Hujber, Matt Labella, Patrick Mac, Sean Merrill, and Leo Parra. You all made the Jorgensen lab one of a kind, and as a physics outsider, welcomed me in with open arms. Lastly, I must thank Becky McKean for keeping the Jorgensen lab running smoothly, and dealing with the myriad of orders and requests I placed through her. Finally, I must thank all the friends and family whose support made this work possible. To my parents, who instilled a love of learning at an early age, and who gave me the foundation to make the academic achievements of my life possible. To my friends within the Department and across campus, for making the University of Utah a great place to be a part of. To my office buddy Alex, for dealing with the incessant click of the keyboard, and giving me lots of figure advice. To my amazing mother and father-in-law, who made life in Salt Lake a joy, and showed me a much larger world. And finally, to my amazing wife Megan, who put up with my long days in the lab without complaint, and saved my sanity by taking me on adventures around the world. May they never stop.
CHAPTER 1 INTRODUCTION TO OPTICAL MICROSCOPY
Perhaps unconventionally, this dissertation will take a longer introductory form than commonly employed. My graduate career existed at an intersection between three different labs, one each from physics, biology, and electrical and computer engineering. The overall goal of my project was to further expand upon the work in the field of localization microscopy, and I will attempt to explain this field in the broader context of microscopy as a whole. However, microscopy is a multidisciplinary endeavor. The instrument itself is designed and operated under the laws of optics and physics, and in its current forms relies heavily on computational control and analysis. The interaction with the sample, namely the use of markers within the sample tagged to a target, involves photophysics and quantum mechanics in the understanding of their behaviors. The field is thus heavily grounded in physics and optics. Moreover, the main goal of microscopy is focused upon investigating the world of the very small, and perhaps in its most visual form, in the world of biological research and investigation. While microscopy has branched out to other fields of science, its early days primarily dealt with the biological world, and led to the discovery of the cellular theory of biology, single-celled organisms, and subcellular components.
Overall technological innovations have allowed for microscopes to become more complex and an even more integral part of biological research as they become integrated into such fields as diagnostic research, studies on cellular dynamics and function, and, along with the use of fluorescence markers to serve as beacons, determining the spatial distribution of proteins in the cellular environment. Much of the first three chapters deals with the background required to allow the reader to see the research presented in the latter chapters in its broader context. This is intended not to take away from the content of the latter chapters, but rather to present it in its proper contextual framework. Specifically, this dissertation will discuss in detail my work investigating the concept of localization microscopy, a new form of optical microscopy that offers the ability to probe the location of individual fluorescent emitters in spatial detail below the conventional diffraction limit. This is a relatively new form of optical microscopy, and even in the short amount of time that it has been a part of the field, it has helped usher in a new era of research interest and development in the field of optical microscopy. These advancements have in fact spurred a large interest in optical microscopy techniques, which is in turn having a large impact mainly within the biological community. This chapter will provide an introduction to the field of optical microscopy, its context within the larger field of imaging and its various modalities, and both its strengths and critical limitations.
1.1 Motivation
Optical microscopy in its various iterations has been around since the days of Galileo, when he fashioned an occhiolino, or compound microscope, with simple convex and concave lenses [1]. The ability to magnify an object may be achieved with only a single lens, a property known for thousands of years. The word microscope stems from the Greek, meaning "small" (micro) and "to look" or "see" (scope), as the primary purpose of such an instrument is to allow for the visualization of objects or details that are too small for the human eye to see unaided. To a large extent, microscopy is a tool for the world of biology. While certainly useful in a host of other scientific disciplines, such as material science, engineering, and geology to name a few, the biological sciences heavily rely on microscopy methods to gain contextual and quantitative information regarding the organization and construction of biological systems. Antonie van Leeuwenhoek used his self-ground lenses in the late 1600s to construct a simple microscope and discovered bacteria, starting the field of microbiology [2]. In 1838, Matthias Schleiden and Theodor Schwann, using newly developed optical microscopes, were able to resolve individual cells for the first time, formulating the theory of cell biology [3]. Today, microscopy in its various iterations is a fundamental tool of biology, allowing researchers to investigate the fundamental components that make up biological systems. The scale of investigations runs from whole or partial examinations of plants or animals, to individual single-celled organisms, subcellular organelles, and finally, down to the individual components of the cell. The range of scales is vast as well. Single cells are ≈10-20 µm in diameter, while individual organelles inside of a cell can be anywhere from tens of nanometers in size to a few microns.
The fundamental building blocks of the cell, such as proteins, and the genetic information carriers, such as DNA and RNA, are macromolecular complexes that can be a few nanometers in size or smaller. Perhaps the most important of these are the proteins of a cell. Proteins constitute the majority of a cell's mass, and are responsible for such functions in the cell as catalyzing metabolic reactions, replicating DNA, transporting molecules from one part of the cell to another, responding to stimuli, performing structural functions, cell-to-cell signaling, immune responses, and cell replication, to name a few. Due to their myriad number of roles, proteins also are a challenge to study. They are ubiquitous in the cell, and the ability to isolate and investigate single types of proteins is an extremely powerful tool in helping to understand a particular protein's functionality within the cellular environment. The scientific pursuit of the study of cellular systems has led to the development of numerous types of microscopy.
1.2 Types of Microscopy
In general, the term "microscope" is usually associated with light, since these were the first to be developed and remain the most common. The simplest optical microscopes, compound microscopes, allow the user to place the object under a series of lenses, and the image of the object is magnified. Numerous, more complicated, optical modalities have been developed to allow for the discrimination of internal structures within biological samples. For further reading on the various types of optical methods in microscopy, the reader is referred to reference [4]. Furthermore, there are classes of microscopes that avoid the use of light altogether. For example, large amounts of imaging done within the biological sciences are performed by electron microscopy, where beams of electrons are used to image a sample, avoiding the use of light altogether [5]. In scanning electron microscopy, a focused beam of electrons is scanned over a sample, and the electrons that scatter off of the sample are recorded, building up the image pixel by pixel. In transmission electron microscopy, electrons are transmitted through an ultrathin sample, and the electrons are focused onto an imaging device to generate an image. The electron microscope has proven to be a hugely powerful diagnostic tool, and is capable of generating images with extremely high levels of detail regarding the cellular structure, as shown in Figure 1.1, with resolution down to a few nanometers. Finally, there are scanning probe microscopes [6], which measure an "image" by scanning a probe, on the order of tens of nanometers, over the sample. Examples of these include atomic force microscopy (AFM) [7] and near-field scanning optical microscopy (NSOM) [8].
1.2.1 Optical Microscopy
Optical microscopy, as described above, was instrumental in developing the modern theoretical and experimental framework of biology. However, cells are colorless and transparent, making it impossible to differentiate between the various components of a cell. Cells, by weight, are close to 70% water, and there is little in the cell that can naturally absorb large amounts of visible light. Without further techniques to distinguish various structures of the cell, optical microscopy lacks the ability to provide any sort of contrast that would enable distinct features to be resolved.
Like electron microscopy, optical microscopy has benefitted as much from various techniques to stain and introduce contrast to the sample as it has from improvements in the instruments themselves. A general method for creating contrast within a cell is by introducing an organic dye to the cellular environment, which will have a natural affinity for a particular sub-cellular feature. For instance, the dye hematoxylin is attracted to negatively charged molecules, and will bind to DNA and RNA, revealing the location of these molecules throughout the cellular environment [3]. If a particular dye has a natural affinity for a single cellular component, then the distribution of the target component can be visualized easily, since sufficient contrast exists between the target and the remainder of the cellular components.
Figure 1.1. Example of an electron microscope (micrograph) image, showing a high degree of structural resolution. The image shows a small portion of a cross section of the nematode worm Caenorhabditis elegans. The reader is able to see structural detail linked to internal organelles, membranes, and compartmentalization. What is not possible to discern from this image, however, is the distribution of specific proteins within the image. Scale bar, 1 µm. Sample preparation and data collection by the Jorgensen Lab, University of Utah.
1.2.2 Fluorescence Microscopy
While organic dyes produce contrast within the cellular environment, their ability to target and bind to individual components within the cell, or their specificity, is limited. These techniques generally operate by shining white light onto the sample, and recording the image onto a camera. The contrast of the image is a function of the overall absorption of incident white light by the organic dye. Fluorescence microscopy offers an advantage over these methods both in the level of contrast, and the specificity of the technique. The incorporation of fluorescent molecules as the method of contrast enhancement allows for targeting of specific proteins, or the DNA and RNA [9], within the cell. The fluorescent molecules are either introduced into the cellular environment through
This emission is collected by the objective, passes through both the dichroic mirror¹ and emission filters (to remove any residual excitation light), and is focused onto a camera or photon-counting device. Since only the emitted photons from the fluorescent molecules reach the detector, even a small number of photons can be imaged to produce a quality image. An example of the advantages of fluorescence microscopy is shown in Figure 1.3. Part (a) of the figure shows a conventional optical microscopy image of the nematode worm Caenorhabditis elegans. The general outline of the worm is clearly visible, as well as a few internal structures. Part (b) of the figure is a fluorescent image, where only certain neurons within the worm have been labeled with the first isolated fluorescent protein, Green Fluorescent Protein (GFP). This allows for an easily visualized mapping of the neurons in question, and their distribution throughout the organism.
¹Dichroic mirrors come in many variants. Some, called long-pass dichroics, are transparent to wavelengths above a certain threshold, while reflective to those below. Short-pass dichroics are the opposite. Others only pass a very narrow spectral band, while others are called multiband and are reflective and transparent to multiple regions of the visible spectrum. Interestingly, many of the best specialized optical component companies are located in Rochester, NY, where a little company called Kodak was formed in 1888.
Figure 1.2. Basic illustration of the design of a fluorescence microscope. The excitation light, shown in blue, is directed onto the sample by the use of a dichroic mirror (see main text) and microscope objective. The source can be a laser (or lasers), a high-powered LED, or a broad-band lamp. The emitted fluorescence, shown in green, is collected by the same objective. Since dichroic mirrors are wavelength specific, the emission photons will pass through the dichroic, separating the excitation from the emission. An emission filter will further suppress any remaining excitation light from the optical path, and the light is then focused onto some form of detector, such as a camera or photon counting device.
While fluorescence microscopy offers a tremendous advantage in terms of chemical specificity and the ability to view only the desired target proteins or cellular components of interest, it is subject to the resolving power of optical methods. The diffraction limit is the fundamental limit of the resolving power of an optical system, and is given as ≈ λ/2. Fluorescence microscopy operates in the visible region of the electromagnetic spectrum, meaning that the fundamental resolution of fluorescence microscopes is 200-300 nm. For studies of whole organisms, such as demonstrated in Figure 1.3, where the length scale is over hundreds of microns, the resolution limit does not generally limit the information content of the image. In studies involving protein localization within the cellular environment, or protein-protein interaction studies, this poses a fundamental obstacle. The size of individual proteins is in the 2-5 nm range, meaning that the best possible resolution of an optical system is two orders of magnitude larger than the protein being studied.
Figure 1.3. Composite image showing one of the most utilized biological model organisms, the nematode worm Caenorhabditis elegans (C. elegans). Top section (a) shows a transmitted light image of the worm using differential interference contrast. While large structural features may be discerned, no further information regarding protein expressions or distributions is available. Publicly available image by Ian D. Chin-Sang, Queen's University, Kingston, ON, Canada.
Bottom image (b) gives an example of fluorescence microscopy, showing the specificity of the technique. In this specimen, only particular neurons within the worm (GABA neurons²) were genetically modified to express green fluorescent protein (GFP, discussed further in Chapter 2) in the cytoplasm of the cells. As a result, the only optical signal from the worm upon excitation comes from cells expressing GFP, showing the neuronal network throughout the organism. Image by Dr. Randi Rawson, Jorgensen Lab, University of Utah. Scale bar, 100 µm.
²GABA neurons are neurons within the worm that make and release the neurotransmitter gamma-aminobutyric acid, abbreviated GABA. In the nematode worm C. elegans, the neurotransmitter GABA primarily acts at neuromuscular junctions.
1.3 Microscopy, Specificity, and Resolution
Each of the variants of microscopy has its inherent strengths and weaknesses. A qualitative illustration of various methods and their relation to chemical specificity and resolving power may be seen in Figure 1.4. Optical microscopes offer a high degree of chemical specificity, or the ability to distinguish between specific types of molecules and proteins within a biological sample, along with the ability to image live specimens. The major downside is their relatively poor resolution. Electron microscopy offers the ability to resolve detail on the nanometer scale, yet offers limited chemical specificity, and cannot be performed on live samples. Scanning probe methods achieve nearly the same resolving power as electron microscopy, yet can only probe the surface of a biological structure, and so are inadequate for studies requiring any imaging within the interior of a sample. An ideal instrument would be one that has both a high degree of chemical specificity as well as high resolution. Recall that in Figure 1.4, the further to the right an imaging modality is placed on the chart, the higher its resolving power. What would be ideal for biologists is an optical method that can combine the chemical specificity of optical microscopy methods with the high spatial resolution of electron microscopy [12]. A first step in this direction can be seen in Figure 1.5, which illustrates how it is possible to combine an optical image of a sample with the image of the same sample from an electron microscope. However, the dissimilarity between the resolving scales of the two methods is abundantly clear in this image. Each method has its own strengths and weaknesses, and gives a different conceptual understanding of the sample in question, but is ultimately hampered by the drastic resolution disparity between the two methods, namely due to the diffraction limit of optical systems.
Figure 1.4. A survey of various microscopy techniques, plotted with respect to both chemical specificity (vertical axis) and their resolving power (horizontal axis). (Methods shown include bright-field, fluorescence, Raman spectroscopy, near-field, AFM, and electron microscopy.)
Figure 1.5. Figure representing an overlay between an optical image of individual proteins (colored features) with the structure of the cell as seen in the electron microscopy image underneath. The image is the same image as seen in Figure 1.1, only now combined with the fluorescent image of the cross-section of a C. elegans whose ryanodine receptors are tagged with the fluorescent protein tdEos. The low resolving power of the fluorescent image does not allow more than a rough estimate of the protein's location within the larger framework of the host organism. However, the EM image shows fine structural detail, yet it is impossible to discern the location of individual proteins. Scale bar, 1 µm. Sample preparation and data collection by the Jorgensen Lab, University of Utah. Optical images recorded on a Zeiss Elyra single-molecule localization microscope operating in total internal reflection (TIRF) mode. Electron micrograph recorded on an FEI novaNano scanning electron microscope.
³This statement obviously ignores the whole field of near-field optics, which has long been able to resolve features at the nanoscale.
1.4 Summary and Outline
The remainder of this dissertation is outlined as follows. Chapter 2 will give an in-depth analysis of some of the main strengths and weaknesses of optical microscopy. The first half of the chapter will focus on one of optical microscopy's main advantages, which is the chemical specificity of the method. It will explore the fundamentals of fluorescence, how this is utilized in optical microscopy, and the mechanisms by which fluorescent molecules are joined to target proteins. The latter half of the chapter will cover the physical and mathematical derivation of the diffraction limit of optical systems in detail, and derive expressions relating to the fundamental resolving power of conventional optical microscopy systems. The past 15 years of academic research, however, have seen a paradigm shift in optical microscopy, demonstrating that the resolution limit for far-field microscopy imposed by Abbe is not completely absolute.³ The development of various "super-resolution" methodologies in the field of optical microscopy has allowed for imaging beyond the conventional diffraction limit, and the Nobel Prize in Chemistry for 2014 was awarded to three of the pioneers of this field of research. Various iterations of these methods can achieve resolution in the tens of nanometers, and have allowed for a rapid expansion in the capabilities of optical instruments. One of these variants is known as localization microscopy. Localization microscopy utilizes time-multiplexing (isolating point-emitters in time) to allow for a statistical analysis on individual point-emitters within a sample. As will be explained in further detail in Chapter 3, this time-multiplexing allows for the spatial isolation of individual point-sources, which can then be localized to a high degree of precision, where the uncertainty in the location of the point-source is lower than the classical diffraction limit. The localized point-sources are then rendered as a function of the uncertainty in their location onto a single image. This technique is the most common of the super-resolution modalities, and offers an approximate increase in resolution⁴ of almost an order of magnitude. The main focus of this dissertation is on methods to increase the precision with which the location of these individual point-sources may be estimated.
Chapter 3 will discuss the field of super-resolution microscopy, localization microscopy in particular, and comprehensively study its application. Chapter 4 will describe in detail how the detected emission in localization modalities may be modified to generate a higher localization precision through the use of self-interference of the emission. As is demonstrated in the chapter, this method allows for an approximately three-to-five-fold increase in the precision of localization methods. In Chapter 5, a method of using the optical transfer function to measure a particle's position is discussed. This method, while differing slightly from conventional localization microscopy, illustrates a concept that improves the estimation precision by approximately a factor of two. Finally, the concluding chapter will discuss future avenues of research, from topics discussed in Chapter 4 to returning to the concept illustrated in Figure 1.5. This idea, called correlation microscopy, aims to combine optical super-resolution methods with electron microscopy. Unlike the diffraction-limited image shown in Figure 1.5, the use of super-resolution methods allows for a much higher degree of merging of the two methods, due to the elimination of the large disparities in the level of resolution of the two methods. Further, more detailed information is contained within the Appendices, such as the full derivation for the spatial distribution of the electric fields within a focus (Appendix A), as well as an introduction to Fisher information theory and its implications (Appendix B) for modeling a given experimental distribution. Appendices C, D and E give detailed information relating to experimental parameters contained within Chapters 4 and 5.
⁴Resolution in super-resolution microscopy, especially in localization microscopy, becomes a bit of a gray area. This is discussed further in Chapter 3.
1.5 References
[1] I. I. Smolyaninov, HFSP Journal 2, 129 (2008).
[2] B. Huang, H. Babcock, and X. Zhuang, Cell 143, 1047 (2010).
[3] B. Alberts et al., Molecular Biology of the Cell, Garland Science, New York, NY, 5th edition, 2007.
[4] D. E. Chandler and R. W. Roberson, Bioimaging: Current Concepts in Light and Electron Microscopy, Jones & Bartlett Publishers, Sudbury, MA, 2009.
[5] R. Erni, M. D. Rossell, C. Kisielowski, and U. Dahmen, Physical Review Letters 102, 096101 (2009).
[6] J. J. Greffet and R. Carminati, Progress in Surface Science 56, 133 (1997).
[7] G. Binnig, C. Quate, and C. Gerber, Physical Review Letters 56, 930 (1986).
[8] U. Durig, D. W. Pohl, and F. Rohner, Journal of Applied Physics 59, 3318 (1986).
[9] D. Proudnikov and A. Mirzabekov, Nucleic Acids Research 24, 4535 (1996).
[10] M. Chalfie, Y. Tu, G. Euskirchen, W. W. Ward, and D. C. Prasher, Science 263, 802 (1994).
[11] I. D. Odell and D. Cook, Journal of Investigative Dermatology 133, e4 (2013).
[12] S. Watanabe et al., Nature Methods 8, 80 (2011).
CHAPTER 2 THE THEORETICAL FOUNDATIONS OF OPTICAL MICROSCOPY
As briefly outlined in Chapter 1, this chapter will give a detailed overview of two physical phenomena associated with optical microscopy, namely the principle of fluorescence and the diffraction limit. The principle of fluorescence allows for detailed studies of specific cellular components, most notably proteins. Due to the wide range of available fluorescent markers and their emission spectra, fluorescence microscopy has evolved into a robust and integral tool in the cellular and molecular biology research fields. The physical mechanism of this process will be discussed in the first half of this chapter.
The downside to fluorescence microscopy is the limitation posed by the diffraction limit. The latter half of this chapter is devoted to the physical and mathematical derivation of the diffraction limit, and explores the sources of this limit. The derivation of the limit is first explored through the Heisenberg uncertainty relationship of electromagnetic waves, and then further explored through the mathematical framework of the angular spectrum representation. It is through this framework that the mathematical form of a diffraction-limited image of a point-source is derived.
2.1 The Principle of Fluorescence
Conventional optical microscopy creates an image by either passing light through a transparent sample or reflecting it off of an opaque sample. However, these methods do not allow for a high degree of chemical specificity, nor the ability to differentiate the specific molecular components of the cell. To achieve this, the principle of fluorescence is utilized. Fluorescence is the emission of light by a substance that has undergone absorption of light; in most instances the emission light will be of a different color than the incident light. An example of this technique can be seen in Figure 2.1, where three distinct proteins within a kidney epithelial cell from an African green monkey (Cercopithecus aethiops) are labeled with three distinct fluorescent markers, each with a distinct emission spectrum. Each color can be imaged separately, using the correct filters, and the resultant images combined into a single composite three-color image.
Figure 2.1. Three-color fluorescence microscopy image of a cell. Each color represents a distinct protein labeled via immunofluorescence techniques [1]. Red: tubulin (protein subunit of microtubule filaments, the cellular "highways" for motor proteins and intracellular transport). Green: TOM20 (central protein component of the TOM receptor complex present in the outer membrane of mitochondria). Blue: clathrin (protein responsible for the formation of coated vesicles within the cell). Scale bar: 10 µm. Sample preparation and data collection by the Jorgensen Lab, University of Utah. Images recorded on a Zeiss Elyra single-molecule localization microscope in epi-fluorescence mode.
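As a brief illustration of the compositing step just described, the following sketch (a hypothetical example, not from the dissertation; the synthetic channel images simply stand in for data recorded through separate emission filters) normalizes three single-channel images and stacks them into one red-green-blue composite.

```python
import numpy as np

def normalize(channel):
    """Scale a single-channel image to the 0-1 range for display."""
    channel = channel.astype(float)
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo) if hi > lo else np.zeros_like(channel)

def three_color_composite(red, green, blue):
    """Stack three separately acquired channels into one RGB image.
    Each input is a 2D array recorded through its own emission filter."""
    return np.dstack([normalize(red), normalize(green), normalize(blue)])

if __name__ == "__main__":
    # Synthetic stand-ins for three filter channels (e.g., tubulin, TOM20, clathrin).
    rng = np.random.default_rng(1)
    shape = (256, 256)
    channels = [rng.poisson(lam, size=shape) for lam in (5.0, 3.0, 2.0)]
    rgb = three_color_composite(*channels)
    print(rgb.shape, rgb.min(), rgb.max())  # (256, 256, 3) 0.0 1.0
```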
Organic dyes may be attached to specific proteins by the use of antibody staining techniques [1], while fluorescent proteins can be genetically inserted into the genome of the cell, and genetically encoded into the protein of interest [5]. When the sample is subjected to excitation light of the appropriate wavelengths, the fluorescent markers will absorb light and then emit light of a slightly different wavelength (color). This emission can be collected and separated spectrally from the excitation light, making fluorescence extremely sensitive even to individual molecules. It is this feature that makes optical microscopy methods so important to the biological sciences - individual protein distributions can be mapped within the cellular environment. The downside of optical methods is their relatively low resolving power. An ideal instrument for imaging of biological samples is one that is both chemically specific and offers high resolution. 2.1.1 Fluorophore Interactions with Light When a photon interacts with a ^-conjugated molecule, the photon is absorbed and an electron in the ground state (this ground state usually contains two electrons) is promoted to a higher electronic state. This promotion to a higher electronic state gives the electron a new principal quantum number, n. The interaction of the electron to the electric field of the photon is extremely fast, and occurs on the order of « 10-15 s. Upon promotion to the excited state, the system can reside in either a singlet state (Sn) or a triplet state (Tn), where the electron spin configuration is either antiparallel 1 For a comprehensive overview of the use of fluorescence in optical imaging, the reader is referred to reference [2]. 17 Figure 2.2. Chemical structure of two of the most common fluorophores used in fluorescence microscopy. (a) AlexaFluor 568, a widely used organic dye molecule [3]. (b) Chromophore of Green Fluorescent Protein (GFP), the first isolated protein that exhibited the property of fluorescence [4, 5]. or parallel, respectively. In a singlet state, the electron in the excited state is still paired with the remaining ground-state electron.2 "Paired" in this context means that the two electrons have opposite spin, per the Pauli exclusion principal [6]. In the triplet state, the two electron spins are no longer paired, and are aligned. Thus, for triplet states, transitions back down to the ground state are "forbidden," since each electron would posses the same spin value. Due to the exchange interaction between the two states, the triplet state is a lower energy state for principal quantum numbers n > 0. Thus, the most common electronic transitions are those that involve a conservation of spin configuration (such as S0 ^ Sn) [7]. AJablonski diagram is illustrated in Figure 2.3. Being molecular systems, fluorophores posses numerous vibrational sublevels at each principal electronic energy level, due to the presence of chemical bonds linking the constitute atoms. During optical transitions from one principal electronic state to another, transitions to higher vibrational levels are allowed, and are explained via the Franck-Condon principle (discussed in Section 2.1.2). Electrons that are excited to higher vibrational levels within a principal electronic state relax quickly to the ground state of the electronic state; this process occurs on the order of « 10-12 - 10-10 s. 
Electrons can also nonradiatively de-excite to a lower electronic state via a process called internal conversion (IC) that is highly dependent on electron-phonon coupling [8]. 2Electrons are fermions, which means that they each must have a unique quantum state. Every possible orbital of an atom or a molecule can hold two electrons, with the two electrons having opposite spin. Figure 2.3. Jablonski diagram illustrating the principle of fluorescence, and the allowed transitions between electronic states (black) and vibrational states (dashed blue). Both radiative (straight lines) and nonradiative (wavy lines) transitions are shown. IC = Internal conversion, ISC = Intersystem crossing, S0 = Singlet ground state, Sn = Singlet excited state, Tn = Triplet excited state. Adapted from [7]. In general, radiative decay to the ground electronic state occurs from the ground vibrational state of an excited electronic state, known as Kasha's rule [9], as is shown in Figure 2.3. Molecular systems may undergo a process known as intersystem crossing (ISC), where the spin manifolds of the excited state are exchanged. This process is mediated through spin-orbit coupling between the two states. In this way, an excited fluorophore can change from a singlet to a triplet; this process is most likely to occur when the vibrational levels of the two states overlap. In organic molecules consisting mainly of atoms with small atomic mass, the process of spin-orbit coupling is relatively weak. De-excitation to the ground state from a singlet state is the most common form of radiative decay, and occurs on the order of ≈ 10⁻⁹ s. This is known as fluorescence. Transitions from the triplet state are known as phosphorescence, and take orders of magnitude longer to decay. Most phosphorescent decay paths are on time scales of ≈ 10⁻⁶ - 1 s. Since the lifetimes of the two states are so dissimilar, the triplet state is known as a "dark" state, due to its relatively long lifetime. 2.1.2 Franck-Condon Principle The Franck-Condon principle is the collective name given to the physical explanation, developed by James Franck and Edward Condon [10-12] in the 1920s, describing the probability of transitions in molecules. It is used to explain the principles behind vibronic (electronic plus vibrational) transitions in molecules due to either the absorption or emission of a photon. The principle rests heavily on the Born-Oppenheimer approximation [13], which allows for a decoupling of the motion of the electrons from the vibrational motion of the nuclei of the molecule. This approximation is valid due to the approximately four orders of magnitude difference in mass between electrons and atomic nuclei. As a result, during electronic transitions the positions of the nuclei of the molecule remain unchanged, and readjust only when the electrons have adopted their final distribution [6]. Another assumption of the principle is that the processes involved happen at a low enough temperature. The consequence of this assumption is that in the principal electronic state, higher vibrational energy levels are not occupied, and optical transitions occur from the ground vibrational state.
This assumption is valid for the reason that thermal energy (kT) at room temperature is approximately an order of magnitude below the energies required for carbon-carbon double bond stretching (recalling that these bonds make up the fluorescing molecule). Finally, the optical transitions between states, as noted earlier, are extremely fast, and the nuclei are approximated as stationary during the transition. Due to the increasing anharmonicity of the potential at higher principal quantum numbers (electronic states), the potential wells of the excited states of the molecule are laterally shifted when represented with respect to the nuclear coordinates. Keeping all of these considerations in mind, the Franck-Condon principle states that during an electronic transition, a change from one vibrational energy level to another will be more likely to happen if the two vibrational wave functions overlap significantly. These optical transitions are then "vertical" transitions, as shown in Figure 2.4. Another aspect of the Born-Oppenheimer approximation is that the total wavefunction can be factorized into an electronic part and a vibrational part χn(vn), where the latter is the harmonic oscillator function with vibrational quantum number v. To further simplify matters, rotational vibrations are ignored due to their small overall contribution, and the spin wavefunction is neglected. Figure 2.4. Optical transitions of a fluorescent molecule according to the Franck-Condon principle. The absorption and emission spectra shown on the bottom are those of AlexaFluor 568 (note that while energy increases to the left, wavelength increases to the right). Adapted from [14]. The harmonic oscillator functions of the ground and excited state are defined with respect to different zero-positions of the generalized configuration coordinate q. The transition probability P(v0=0 → v1) from the ground state χ0(v0 = 0) to a vibrational level v1 of the excited state S1 is given by

P(v0=0 → v1) = |⟨ψ1|M|ψ0⟩|² · |⟨χ1(v1)|χ0(v0=0)⟩|², (2.1)

where ψ0 and ψ1 are the electronic wavefunctions of the ground and excited states, and M = e·r is the electric dipole moment vector, which is found by summing over all electrons. The first term in Equation 2.1 is the squared electronic dipole matrix element, which quantifies the electronic transition intensity of the system. The second term is the Franck-Condon factor that distributes this intensity between different vibrational states [15]. Following excitation to an excited electronic state, the electron will relax down to the ground vibrational state as per Kasha's rule [9]. Radiative relaxation of the fluorophore then occurs from the vibrational ground state v1 = 0 to any vibrational state of the ground electronic state, with the transition probabilities highest for overlapping vibrational states. This feature is analogous to excitation, where the electron can be excited to any vibrational state of the excited state. This has the effect of causing symmetry between the excitation and emission spectra of the fluorophore, as is shown in Figure 2.4. In most cases, the molecule dissipates energy via vibrational relaxation or energy transfer, and the emission photon has less energy than the excitation photon. This shift in energy between the excitation and emission photons is known as the Stokes shift [2]. This spectral shift between excitation and emission photons is what enables the physical separation of excitation and emission light in microscope systems, with the use of wavelength selective mirrors and filters.
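As a concrete illustration of how the Franck-Condon factor of Equation 2.1 distributes intensity among vibrational levels, the short sketch below evaluates the factors under the common simplifying assumption of two displaced harmonic potentials with equal vibrational frequency, in which case the factors from v0 = 0 follow a Poisson distribution governed by a dimensionless displacement (Huang-Rhys) parameter S. The parameter S and the function name are illustrative assumptions, not quantities used elsewhere in this chapter.

from math import exp, factorial

def franck_condon_factor(v1, S):
    # |<chi_1(v1)|chi_0(0)>|^2 for two displaced harmonic oscillators of equal
    # frequency; S is an assumed Huang-Rhys (displacement) parameter.
    return exp(-S) * S**v1 / factorial(v1)

# Relative vibronic intensities for an assumed displacement S = 0.9
for v1 in range(5):
    print(v1, round(franck_condon_factor(v1, 0.9), 3))

A larger displacement between the ground- and excited-state potential wells shifts the most probable transition to higher vibrational levels, which is the vertical-transition behavior sketched in Figure 2.4.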
In summary, it is the ability to couple fluorescent molecules to particular proteins of interest in biological samples in addition to the high sensitivity enabled by the Stokes shift in emission that gives optical microscopy its high degree of chemical specificity. In principle, any protein that can be identified can be tagged with a fluorescent marker, and its physical distribution within the biological sample (cell, organism) can be investigated and studied. However, optical microscopy suffers from a fundamental drawback, one that prevents the direct imaging of proteins on physical relevant length scales. This drawback is the diffraction limit. 2 .2 R e so lu tion , th e P o in t-Sp re ad F u n c tio n , and th e D iffra c tio n Limit Resolution, in strictest terms, is the ability to resolve detail in an image. For astronomers, this could refer to the ability to distinguish between two stars or galaxies in close proximity. For the biologist, this could refer to two structures or two proteins close to each other within a sample of interest, or cell. With regard to imaging systems, a clear distinction must be made between magnification and resolution. Magnification refers to how much the final image is magnified with respect to the initial image. However, without a high level of resolution, continual magnification of the image will at some point produce no new information nor detail; the image can be further enlarged, but no new structural content will become apparent. At best, magnification preserves the initial resolution of the image, but can never enhance it. An example of the difference between 22 high resolution and low resolution may be seen in Figure 2.5. Other terms must also be distinguished from resolution, such as sensitivity and precision. When talking about how sensitive an instrument is, this most often (and the case here) refers to the ability to detect and image a small number3 of fluorophores from a sample, taking into account inherent background signal and noise, but says nothing about the spatial resolution of the image itself. Sensitivity most often refers to the ability to detect even extremely small numbers of photons. Finally, the term precision means the ability to pinpoint the exact spatial location of a given fluorophore, but says nothing about the the ability of the system to resolve the overall distribution of fluorophores within a sample. Resolution is more about the relationship and spacing between distinct features of a sample, not the exact position of a single object or fluorophore [16]. This topic will be further touched upon in Chapter 3. 2.2.1 The Diffraction Limit In optical microscopy, light is collected from the sample through the optical system of the microscope where it is imaged onto some form of a photon counting device, such as a camera by means of another lens. The most important element of a microscope is the objective, which sits right next to the sample.4 The most important parameter of the objective is the numerical aperture, which is directly related to light collecting ability of the lens. Numerical aperture is function of the index of refraction through which the light is collected 5 and is defined as NA = n sin 8. (2.2) See Figure 2.6(a) for a schematic illustration. The NA of a microscope is directly linked to the value of n, with nair = 1, nwater = 1.33, and noii = 1.51. NA values for modern 3Ideally, even down to a single fluorophore. 4The assumption here is that either an oil or water immersion lens is being used. 
Air objectives can have longer working distances (the distance from the front of the objective to the focal plane), and can be quite a few millimeters from the source. Still, the objective lens is the most crucial part of a microscope. 5The index of refraction n is a measure of a material's interaction with light, namely the electric field of the photon. It is important in optics because it is correlated with a material's ability to bend light rays. Technically, the index of refraction is a measure of how fast light travels through a medium, v, compared to the speed of light in vacuum, c: n = c / v. Figure 2.5. Resolution target illustration. A representation of a publicly available Siemens star, which is commonly used to determine the resolution of a system. The star in part (a) of the figure shows a high degree of detail, and has a high resolution. The star in part (b) shows a much lower level of detail, and consequently has a lower resolution. The structural information in (a) is higher than in (b). objectives range from ≈ 0.1 to 1.65. High-end microscopes tend to use either water or oil objectives to maximize the light collecting ability of their systems. The overall quality of the image is fundamentally limited by the diffraction limit. As is illustrated in Figure 2.6, the ability of an imaging system to resolve fine detail is limited by how well light can be focused by a lens. The tighter the focal spot, the higher the resolution of the image produced. The actual spatial extent of the focal spot is referred to by the term point-spread function, or simply the PSF. The PSF can be described in terms of an excitation PSF (where an incoming light source is focused onto a sample) or the emission PSF, which is the image of an optical source on the detector. Unless explicitly stated otherwise, the term PSF will refer to the emission PSF. Classically speaking, light is an electromagnetic wave governed by Maxwell's equations. As such, the minimal size of the focal spot produced by a lens is never infinitely small. Thus, even when light is emanating from a point-like source, the resultant image will have a finite size (hence the name point-spread function). This was first investigated by Ernst Abbe, and the limit of a microscope's resolving power to this day bears his name [17, 18]. The term Abbe limit refers to the resolving power of a microscope and the smallest spatial distance two points can be apart and still be discernible as distinct objects. Figure 2.6. Figure illustrating the parameters of a focal spot. (a) A ray tracing schematic for a simple lens. If a collimated beam of light (parallel to the optic axis, dashed line) is incident onto the lens, then the ray bundles will converge at a distance f, or the focal distance, from the lens. This is the focal point of the lens. The angle θ denotes the maximum angle of rays converging to the focal spot. (b) Each ray bundle carries its own wave vector, denoted k. In cylindrical coordinates, k may be decomposed into kz and kρ. Each wave vector carries uncertainty about its absolute value. This will be discussed further in Section 2.2.2. For light of wavelength λ, the Abbe limit is mathematically given by

Δx = λ / (2 NA). (2.3)

Photons in the visible spectrum have a wavelength from 400 nm (violet) to 800 nm (red). As an example, if 500 nm light is imaged through an objective with an NA of 1.49, the resolution limit will be ≈ 168 nm.
However, this is misleading because it assumes a perfect imaging system with no noise or distortions, which is never the case.6 Practically speaking, the resolution limit in optical systems is ≈ λ/2. 6In wide-field systems, where all of the light is collected and imaged onto the camera, light from either above or below the focal plane contributes to noise within the image, degrading the practical resolution. 2.2.2 The Heisenberg Uncertainty Principle The resolution limit can be further examined using the Heisenberg uncertainty relationship [19, 20]. In quantum mechanics, the uncertainty relationships are invoked to understand the constraints on a propagating wave function. Similar relationships hold in the case of electromagnetic waves [21], due to the fact that the Fourier-related real-space vector x and the momentum-space wave-vector are a pair of conjugated variables [19, 22]. The optical analog of the Heisenberg uncertainty relation can then be written as [20]

Δx Δpx ≥ h, (2.4)

where Δx and Δpx represent the uncertainty in the position (x) and momentum (px) in the x coordinate, and h is Planck's constant. A complete description of the particle in terms of both its position and momentum is forbidden due to the uncertainty relationship. As can be seen from Equation 2.4, knowledge regarding the particle's position may be gained at the expense of knowing the particle's momentum, or vice versa. Using the de Broglie relationship p = ħk, where k = 2π/λ is the wavenumber of the particle, the uncertainty relationship may be written as

Δx Δkx ≥ 2π. (2.5)

This relationship is more meaningful, since the wave vector k is directly related to the wavelength of light. The wave vectors are shown in the schematic of Figure 2.6. In principle, if it were possible to make Δkx infinitely large, then the uncertainty in the spatial resolution Δx could be infinitely small. In a conventional microscope, however, only a small portion of the spread of Δkx vectors is collected by the objective.7 For a microscope objective of a given NA, only the spatial frequencies between |kx| = 0 and |kx| = n sin θ (ω/c) will be collected by the objective.8 If the spread of wavevectors, Δkx = 2 n sin θ (ω/c), is inserted into Equation 2.5, and the dispersion relationship for photons, ω/c = 2π/λ, is used, Abbe's formula for the diffraction limit is recovered,

Δx = λ / (2 n sin θ). (2.6)

As stated earlier, as well as in reference [23], only a portion of the possible wavevectors reaches the objective. Even if the objective were able to collect the wavevectors over the entire range of angles, namely sin θ → 1, only the propagating far-field wavevectors would be collected. 7A fluorophore can be accurately modeled as a radiating dipole. A radiating dipole produces a complex emission field, composed of the near-field (electric field distribution on length scales smaller than λ) and far-field (electric field distribution on length scales larger than λ). A microscope objective can never be close enough to collect the near-field distribution, limiting the resolution of the system from the start. This will be discussed in Section 2.3, as well as reference [23]. 8ω is the angular frequency of the photons, and c is the velocity of light. The inability to reconstruct an image in perfect detail (a point-source is imaged as a point-source) is a direct consequence of the inability to couple the entire spectrum of wavevectors into propagating waves.
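The numbers quoted above can be checked directly. The following minimal sketch (function and variable names are illustrative only) evaluates Equation 2.3/2.6 for the example given in the text:

def abbe_limit(wavelength_nm, numerical_aperture):
    # Abbe diffraction limit, Eq. 2.3 / 2.6: dx = lambda / (2 NA) = lambda / (2 n sin(theta))
    return wavelength_nm / (2.0 * numerical_aperture)

print(abbe_limit(500, 1.49))   # ~167.8 nm, the ~168 nm figure quoted in the text
print(abbe_limit(500, 1.0))    # ~250 nm, i.e., roughly lambda/2 for an NA of 1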
The branch of nano-optics called near-field microscopy was developed for precisely this reason - to improve the spatial resolution of a system by collecting a larger portion of the possible wavevectors from an emitting point source. The mathematical foundation for the radiating optical fields of a single point source may be studied via the concept of the angular spectrum representation. The angular spectrum representation and its role in the diffraction limit are discussed in Section 2.3, and in references [20, 23]. 2.3 The Angular Spectrum Representation The idea of the Heisenberg uncertainty principle illustrates why it is impossible to form a perfectly tight focal spot when focusing an incoming beam of light by a lens. The diffraction limit may also be approached from the point of view of the source, and how optical fields radiate and propagate from the source to the detector. In such a context, it is helpful to view the propagation of light in a medium through the mathematical concept of the angular spectrum representation. In this representation, an optical field may be described as a superposition of plane and evanescent (exponentially decaying) waves, which are each in turn solutions to Maxwell's equations. Specifically, the angular spectrum representation means simply a series expansion of an arbitrary optical field (coming from some source at some location in space, such as a fluorophore) in terms of plane and evanescent waves, each with a variable amplitude and propagation direction. If the assumption is made that the electric field E(r) is known at any point r = (x, y, z), any arbitrary value along the z-axis may be chosen and the electric field can be calculated in that particular plane. The Fourier transform of the field E is given by

Ê(kx, ky; z) = (1/4π²) ∫∫ E(x, y, z) e^(−i[kx x + ky y]) dx dy, (2.7)

where x and y are the Cartesian coordinates, kx and ky are the corresponding spatial frequencies, and the integration extends over the entire (x, y) plane. The inverse Fourier transform may be written as

E(x, y, z) = ∫∫ Ê(kx, ky; z) e^(i[kx x + ky y]) dkx dky. (2.8)

If the assumption is made that the medium in which the optical fields are propagating is homogeneous, linear and isotropic, while having no other sources, then the time-harmonic optical field with angular frequency ω must satisfy the vector Helmholtz equation (∇² + k²) E(r) = 0, where k is again given by k = (ω/c) n. To determine the time-dependent solution E(r, t), the general convention of E(r, t) = Re{E(r) e^(−iωt)} is utilized. Inserting the Fourier representation as described in Equation 2.8 into the Helmholtz equation, along with the definition

kz = √(k² − kx² − ky²), with Im{kz} ≥ 0, (2.9)

the Fourier spectrum Ê evolves along the z-axis as [23]

Ê(kx, ky; z) = Ê(kx, ky; 0) e^(±i kz z). (2.10)

Equation 2.10 illustrates that the Fourier spectrum of E at any arbitrary position along the z-axis, for example in the image plane, can be calculated by multiplying the spectrum in the object plane, at z = 0, with the exponential factor e^(±i kz z). Inserting this result back into Equation 2.8 yields the result for any arbitrary z value [23]

E(x, y, z) = ∫∫ Ê(kx, ky; 0) e^(i[kx x + ky y ± kz z]) dkx dky. (2.11)

If the optical field is propagating in a dielectric medium where no losses occur, then the index of refraction n is a real and positive quantity, which has a direct consequence on the wavenumber kz. This wave vector is then either real or imaginary, which in turn dictates whether the exponential factor e^(i kz z) yields an oscillatory function or an exponentially decaying function.
Depending on the values of kx and ky, the solutions are either plane waves of the form e^(±i|kz| z), with the restriction kx² + ky² ≤ k², or evanescent waves of the form e^(−|kz| |z|), with the restriction kx² + ky² > k². 2.3.1 Propagating and Evanescent Waves The angular spectrum is then comprised of a superposition between oscillating plane waves, which propagate into the far-field and can be collected, and exponentially decaying evanescent waves. As demonstrated in Figure 2.7, the larger the angle between the k-vector and the z-axis, the larger the oscillations of the wavevectors in the transverse plane. Figure 2.7. Plane wave representation of the angular spectrum. (a) Coordinate system definition of a single plane wave propagating at an angle of θ with respect to the z-axis. (b) Schematic representation of a plane wave. The wave propagates along the vector k. The spacing of the wave fronts (black lines) is the wavelength λ. (c) Schematic illustrating the concept that the transverse (x and y) spatial frequencies of plane waves are dependent on their incident angles. The transverse wavenumber (kx² + ky²)^(1/2) is dependent on θ and is limited to the range [0 … k]. Plane waves and their propagation direction are depicted outside of the dashed hemisphere, while the projection of the plane waves onto the transverse x-axis is shown inside the hemisphere. As the figure illustrates, for plane waves traveling parallel to the z-axis, there is no modulation along the x-axis. Plane waves traveling parallel to the x-axis exhibit the highest degree of modulation along the transverse axis, up to the wave vector value k. (d) Illustration depicting the spatial confinement in k-space of wavevectors representing plane waves (the interior and boundary of the circle of radius k). Evanescent waves fill the region of space outside the circle. Figure adapted from [23]. Wavevectors propagating along the z-axis have transverse components of kx² + ky² = 0, whereas plane waves propagating at a right angle to the z-axis have transverse components of kx² + ky² = k². Evanescent waves then comprise the remainder of the k-space wavevectors. However, these fields decay exponentially along the z-axis, and will never be collected by the microscope objective [23]. Viewed in this representation, the diffraction limit of light stems from the fact that only wavevectors up to magnitude k propagate into the far-field; the rest decay as evanescent waves, or are only accessible through near-field interactions.9 9This is the motivation behind the numerous methods of near-field microscopy, or near-field optics, as briefly outlined in Chapter 1. Generally speaking, these methods interact with the near-field directly at the sample, or within a fraction of a wavelength. In the language of the Fourier transform, a point source in the spatial domain will have infinite extent in the Fourier plane. The loss of the evanescent waves acts as a low-pass filter in the Fourier domain, meaning that the reconstruction of the point source in the image plane can never reproduce the original source. Since the Fourier spectrum is incomplete, the reconstructed image cannot be confined as tightly in the spatial domain as the source, because the highest frequencies, those that contribute most in creating the sharpest features of an image, have been lost. This bandwidth is then further limited by the inability of the microscope objective to collect the remaining spatial frequencies. For examples of techniques designed to increase the resolution of optical microscopy systems by increasing the spatial bandwidth collection ability, the reader is directed to references [24, 25].
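The filtering role of the evanescent components can be made concrete numerically. The sketch below implements the scalar, sampled-grid version of Equations 2.7-2.11 with NumPy FFTs: the field in the z = 0 plane is Fourier transformed, each spectral component is multiplied by e^(i kz z), and the field is transformed back. All grid sizes and parameter values are illustrative assumptions, not values used in the dissertation.

import numpy as np

def propagate_angular_spectrum(E0, dx, wavelength, z, n=1.0):
    # Scalar angular-spectrum propagation of a sampled field E0(x, y) from
    # z = 0 to a plane at distance z (a sketch of Eqs. 2.7-2.11).
    k = 2 * np.pi * n / wavelength                       # k = (omega/c) n
    kx = 2 * np.pi * np.fft.fftfreq(E0.shape[1], d=dx)   # transverse spatial
    ky = 2 * np.pi * np.fft.fftfreq(E0.shape[0], d=dx)   # frequencies
    KX, KY = np.meshgrid(kx, ky)
    # kz is real for propagating components (kx^2 + ky^2 <= k^2) and imaginary
    # for evanescent ones, which then decay as exp(-|kz| z), per Eq. 2.9.
    kz = np.sqrt((k**2 - KX**2 - KY**2).astype(complex))
    spectrum = np.fft.fft2(E0)                            # Eq. 2.7
    return np.fft.ifft2(spectrum * np.exp(1j * kz * z))   # Eqs. 2.10-2.11

# A single-sample "point" source blurs after propagating only one wavelength,
# because its evanescent components have already decayed away.
wavelength, dx = 500e-9, 20e-9
E0 = np.zeros((256, 256), dtype=complex)
E0[128, 128] = 1.0
E1 = propagate_angular_spectrum(E0, dx, wavelength, z=wavelength)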
2.4 The Airy Profile and Rayleigh Criterion The Heisenberg uncertainty relationship and the angular spectrum representation establish why the image of a point-source is much larger than its source. However, they do not describe the shape and mathematical form that the point-spread function will assume. In order to determine the mathematical relationship governing the distribution of light in a focus, consider an ideal dipole source10 located at the focal point of an objective lens with a high numerical aperture, denoted NA. 10An ideal dipole source is assumed due to the fact that the point-sources used in many microscopy experiments, such as organic dyes, behave very much like an ideal dipole. This is due to the fact that the transition moment within the fluorophore generally has a well-defined direction due to the internal structure of the molecule, as can be seen in Figure 2.2. The objective lens will collimate the collected emission light from the dipole source, which will then propagate through space to a second lens, which focuses the light onto the detector surface located at the image plane. Such a setup is illustrated in Figure 2.8. The physical parameters of the system include the focal length f of the objective lens, the focal length f′ of the second lens (which focuses the emission light onto a detector at a position of z = 0), and the emission source with an arbitrary dipole moment given by µ. The framework for determining the electric field distribution within the focal point of a lens revolves around determining the transformations of the electric fields generated by the source dipole as they propagate through the objective and lens system, and are focused down to the image plane. To begin, the electric field at the position r of an arbitrarily oriented dipole µ located at r0 is given by the dyadic11 Green's function12 G(r, r0) [23]

E(r) = [ω² / (ε0 c²)] G(r, r0) · µ. (2.12)

In this derivation, it is implicitly assumed that the distance from the point source to the objective lens is much larger than the wavelength of the emitted light, which is the optical configuration of any conventional microscope. Under these assumptions, the mathematical framework from Section 2.3 holds. For such an analysis, the Green's function must be evaluated. Since the intensity is the square of the electric field, calculating the transformations of the electric fields from source to detector is required. To simplify the calculation, it is assumed that the source is at the origin, namely r0 = 0. The far-field Green's function G is expressed in spherical coordinates, multiplied by the dipole moment vector µ to obtain the electric field. The electric field transformation is then calculated as the fields propagate through the objective and focal lens. This derivation is lengthy, and is given in full detail in Appendix A. 11A dyadic tensor is a second order tensor, and the term is relatively obsolete today, but it is still often used in mechanics and electromagnetism. 12The Green's function is a mathematical construct rendering the electric field at a point r due to a single point source, represented as a vector dipole µ located at a position r0. Since the field at a given location r depends on the orientation of µ, the Green's function must assume the form of a tensor in order to account for all possible physical orientations of µ. Figure 2.8. The optical configuration used in the calculation of the point-spread function. The dipole source is oriented in an arbitrary direction with dipole moment µ. The radiation from the dipole is collected by an objective lens with focal length f, and then focused onto the image plane by a second lens with focal length f′, at the position z = 0 [23]. For brevity, the results of the full calculations are given below. The paraxial point-spread function in the image plane for a source dipole oriented along the x-axis is given by

|E(x, y, z = 0)|² = [π⁴ µx² / (ε0² n n′ λ⁶)] · (NA⁴ / M²) · [2 J1(2π ρ̃) / (2π ρ̃)]², (2.13)

where ρ̃ = NA ρ / (M λ), with ρ = √(x² + y²), and M is the magnification of the imaging system. The prefactors to this term are just scaling factors to the overall amplitude of the function. The term in square brackets is what determines the actual form of the PSF, which is known as an Airy profile, after George Biddell Airy [26]. The term J1 refers to the Bessel function of the first kind. This function is plotted in Figure 2.9. Lord Rayleigh used this mathematical representation of the image of a point-source in scalar form (where the vector nature of the source is neglected) to derive his famous resolution criterion [27]. Lord Rayleigh described two separate point-sources as being resolvable if two overlapping Airy profiles were arranged such that the maximum or peak of one profile was over the first minimum of the second profile. Relating this to the numerical aperture of a microscope objective, this relation is given by

Δx = 0.61 λ / (n sin θ) = 0.61 λ / NA. (2.14)

The numerical prefactor in the numerator comes from the value of the first minimum of the Bessel function. Figure 2.9. Illustration of an Airy profile, representing the image of a point-source in a diffraction-limited imaging system. (a) 3D representation of the Airy function. The x and y axes are scaled in Airy units, representing the distance from the peak to consecutive minima. Height represents intensity. (b) 2D density plot of the square root of the Airy function, to highlight the minima of the function. This is just a theoretical limit in terms of the resolving power of a microscope, and in practical applications the resolving power is determined by a combination of the optical resolving power of the system along with noise in the image, optical aberrations, and the signal-to-noise ratio [28, 29]. Thus, the above formulation is more of a theoretical best-case scenario, and not necessarily a practical representation of a microscope's performance. The functional form of the Airy profile to describe the diffraction limit may also be derived by looking at the diffraction of light as it enters a circular aperture (say that of the objective), and the diffraction pattern the light will assume on the image plane. The Huygens-Fresnel principle can be applied over the boundary of the circle, and the summation of the total interference pattern arising from the interference effect over the boundary of the circle yields an Airy profile. Also, the diffraction limit may be viewed through the mathematical formulation of Fourier optics, where the diffraction pattern in the image plane is the Fourier transform of the scattering boundary in the Fourier plane. Taking the Fourier transform of a circular opening again leads to the Airy profile in the image plane.
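For reference, the bracketed Airy term of Equation 2.13 and the Rayleigh criterion of Equation 2.14 are simple to evaluate numerically. The sketch below uses the first-order Bessel function from SciPy; the function names and example parameters are illustrative assumptions only.

import numpy as np
from scipy.special import j1

def airy_intensity(rho_tilde):
    # Normalized Airy intensity, the bracketed term of Eq. 2.13:
    # [2 J1(2 pi rho~) / (2 pi rho~)]^2, with the rho~ -> 0 limit set to 1.
    x = 2 * np.pi * np.asarray(rho_tilde, dtype=float)
    out = np.ones_like(x)
    nonzero = x != 0
    out[nonzero] = (2 * j1(x[nonzero]) / x[nonzero]) ** 2
    return out

def rayleigh_limit(wavelength_nm, NA):
    # Eq. 2.14: minimum resolvable separation, 0.61 lambda / NA.
    return 0.61 * wavelength_nm / NA

print(rayleigh_limit(500, 1.49))               # ~205 nm for the example objective used earlier
print(airy_intensity(np.array([0.0, 0.61])))   # peak = 1; near zero at the first minimum

The second printout confirms that the first minimum of the profile sits near ρ̃ ≈ 0.61, which is where the 0.61 prefactor of Equation 2.14 originates.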
It should be noted, however, that these two approaches ignore the vector nature of the electric field of the incident beam, and therefore imaging of single molecules can lead to deviations from the scalar theory. 33 2 .5 Sum m a ry In conclusion, no microscope, especially an optical microscope, is capable of fully reproducing a point-source as an image. The microscope, and objective lens, collect only a fraction of the information regarding the position of the point-source, since only a subset of the wavevectors have propagated into the far-field. As a consequence, they are only able to partially reconstruct a representation of the point source as an image. However, the optical microscope is capable of a high degree of chemical specificity, particularly when fluorescence is utilized as the optical contrast mechanism, and is able to visualize individual proteins [30-32]. As will be further discussed in the following chapters, this fundamental limit on the resolving power of an optical instrument may be circumvented, allowing for the extraction of spatial features and information below the classical diffraction limit. These new methods rely on nontraditional imaging techniques, and are generally much more complicated than conventional imaging. What is lost in terms of ease of use is gained in the resolving power of such systems. 2 .6 R e fe ren ce s [1] A. H. Coons, H. J. Creech, and R. N. Jones, Experimental Biology and Medicine 47, 200 (1941). [2] J. R. Lakowicz, Principles o f Fluorescence Spectroscopy, Springer Science & Business Media, New York, NY, 3rd edition, 2013. [3] N. Panchuk-Voloshina et al., The Journal of Histochemistry and Cytochemistry: Official Journal of the Histochemistry Society 47, 1179 (1999). [4] C. W. Cody, D. C. Prasher, W. M. Westler, F. G. Prendergast, and W. W. Ward, Biochemistry 32, 1212 (1993). [5] M. Chalfie, Y. Tu, G. Euskirchen, W. W. Ward, and D. C. Prasher, Science 263, 802 (1994). [6] P. W. Atkins and R. S. Friedman, Molecular Quantum Mechanics, Oxford University Press, Oxford, 4th edition, 2011. [7] B. Valeur and M. N. Berberan-Santos, Molecular Fluorescence: Principles and Applications, John Wiley & Sons, Weinheim, 2nd edition, 2013. [8] M. Bixon and J. Jortner, Journal of Chemical Physics 48, 715 (1968). [9] M. Kasha, Journal of Chemical Physics 20, 71 (1952). 34 [10] J. Franck and E. G. Dymond, Trans. Faraday Soc. 21, 536 (1926). [11] E. Condon, Physical Review 28, 1182 (1926). [12] E. Condon, Physical Review 32, 858 (1928). [13] M. Born and R. Oppenheimer, Annalen der Physik 389, 457 (1927). [14] A. Thiessen, The influence o f morphology on excitons in single conjugated molecules, University of Utah, 2014. [15] G. C. Schatz and M. A. Ratner, Quantum Mechanics in Chemistry, Courier Dover Publications, Mineola, NY, 2002. [16] R. Heintzmann and G. Ficz, Briefings in Functional Genomics and Proteomics 5, 289 (2006). [17] E. Abbe, Archiv fur mikroskopische Anatomie 9, 413 (1873). [18] E. Abbe, Journal of the Royal Microscopical Society 4, 348 (1884). [19] E. Stelzer and S. Grill, Optics Communications 173, 51 (2000). [20] J. M. Vigoureux and D. Courjon, Applied Optics 31, 3170 (1992). [21] J. W. Goodman, Introduction to Fourier Optics, McGraw-Hill, New York, NY, 2nd edition, 1996. [22] J. T. Verdeyen, Laser Electronics, Prentice-Hall, Englewood Cliffs, New Jersey, 3rd edition, 1995. [23] L. Novotny and B. Hecht, Principles o f Nano-Optics, Cambridge University Press, Cambridge, 1st edition, 2006. [24] S. Hell and E. H. K. 
Stelzer, Journal of the Optical Society of America A-Optics Image Science and Vision 9, 2159 (1992). [25] M. G. Gustafsson, D. A. Agard, and J. Sedat, Journal of Microscopy 195, 10 (1999). [26] E. Hecht, Optics, Addison-Wesley, San Francisco, CA, 4th edition, 2002. [27] L. Rayleigh, Journal of the Royal Microscopical Society (1903). [28] R. Heintzmann and C. J. R. Sheppard, Micron 38, 145 (2007). [29] E. Stelzer, Journal of Microscopy 189, 15 (1998). [30] T. Suzuki, T. Matsuzaki, H. Hagiwara, T. Aoki, and K. Takata, Acta Histochemica et Cytochemica 40, 131 (2007). [31] H. Sahoo, RSC Adv. 2, 7017 (2012). [32] K. M. Dean and A. E. Palmer, Nature Chemical Biology 10, 512 (2014). CHAPTER 3 CIRCUMVENTING THE DIFFRACTION BARRIER VIA OPTICAL METHODOLOGIES As was outlined in detail in Chapter 2, fluorescence microscopy offers the biologist an imaging modality that is highly specific and targeted in its labeling of cellular components. For instance, the technique of fluorescence in situ hybridization can detect distinct base-pair sequences on DNA and RNA molecules, while immunofluorescence methods and targeted genetic labeling with fluorescent proteins allow for imaging of distinct targeted proteins within the cell [1]. The impact that these methods have on the research performed in the biological field are extremely evident; most labs use fluorescence microscopy in numerous assays as a means of study and characterization. The result is that fluorescent images appear in a very large fraction of publications and books in biology and its numerous subdisciplines. As was equally evident in Chapter 2, however, is that the classical diffraction limit poses significant hurdles on the technique. While standard lab practices allow for labeling and detection of individual proteins, the disparity between the size of the proteins in question and the diffraction limit of light is two orders of magnitude. Proteins and protein complexes are on the order of a few nanometers for single monomer or dimer proteins and up to tens of nanometers for large complexes [2], while the diffraction limit of light, assuming the best case scenario of a high numerical aperture and low wavelength of light, is on the order of 200 nm at best theoretically, which is extremely hard to achieve in practice. When considering linear relationship between the wavelength and resolution limit, a natural question that arises is why not use shorter and shorter wavelengths. Going to shorter and shorter wavelengths to increase the resolving power in the image means going into the ultraviolet end of the spectrum. This poses two problems in the context of biological investigations. One is that ultraviolet radiation is lethal for cells, and the 36 incident photons carry enough energy to destroy the chemical bonds of molecular components of the cell. The second problem is that the index of refraction, n, of materials, is a function of wavelength A. At the shorter end of the spectrum, many of the properties that a material displays in the visible range drastically change. The lenses of the microscope become opaque, mirrors lose their reflectivity, and the optical transmission becomes extremely limited. Perhaps even more limiting is that the current range of fluorophores contains an electronic structure whose band gap energies lie within the visible spectrum (which, as outlined in Chapter 2, are comprised mainly of tt-bonds) so new fluorescent probes would have to be designed. 
Consequently, imaging with shorter wavelengths is fairly impractical, although there are a number of recent achievements in this area [3, 4]. Optical microscopy is an invaluable tool in the study of biological systems due to its remarkable level of specificity, regardless of the limitation in the optical resolving power of such systems, codified by Abbe and Rayleigh. The past 15 years, however, have seen a remarkable advancement in the development of optical imaging methodologies, and have seen a sustained and successful effort to push optical microscopy beyond the strict resolution limit into what is now collectively known, for better or for worse, as "super-resolution microscopy." The field of super-resolution was in fact the field of research awarded the Nobel Prize in Chemistry in 2014, with the award going to Drs. Eric Betzig [5], Stefan W. Hell [6], and William E. Moerner [7] for their work to push far-field optical methods below the diffraction limit. The next section will give a brief review of the research in the field of super-resolution microscopy, but will first start with the first method to push past the limit of Rayleigh, that of confocal microscopy. 3.1 Super R e so lu tion Microscopy in its Many Forms Confocal microscopy, as an idea, came about in the late 1950's [8]. In a confocal microscope a focused laser beam is scanned through the sample in a predetermined path, and the emission light is collected and directed onto a photon counter. The total image is then built up pixel by pixel as the laser is scanned through the system. The confocal microscope adds two advantages to basic imaging systems. One is the ability to do optical sectioning. This is achieved by placing a pinhole in the emission path in 37 a conjugate image plane, which blocks out-of-focus light. This improves the contrast of the image by rejecting large amounts of background. The second advantage is the fact that the excitation volume within the sample is that of a focused laser beam (the excitation PSF), and only fluorophores within the excitation volume are excited and give off fluorescence. The emission of the sample is then confined to the spatial extent of the excitation focal spot, and the total PSF of the image is given by the product of the excitation with the emission point-spread functions. The end result is that the final point-spread function in the image plane is the square of point-spread function of a conventional image. which leads to a tighter confinement of the PSF. Mathematically, the prefactor of 0.61 in Equation 2.14 becomes approximately 0.4 as a result. The main advantage to the technique, however, is in its optical sectioning capabilities, and while the gain in the lateral PSF is marginal (the largest improvement comes in the axial confinement, which is related to the optical sectioning capabilities of these instruments), it still represents a method to move beyond what is otherwise considering the "conventional" case of the diffraction limit. One of the more common usages for confocal microscopy is in two-photon systems, where the confinement of the excitation beam is required to generate nonlinear optical responses from the sample [9]. Due to the nonlinear nature of this method, this leads to a more tightly confined emission PSF than in single-photon systems1 Two other methods that push beyond the classical diffraction limit deserve mention as well. 
These methods are similar in the sense that they increase the effective numerical aperture of a system by using a dual-objective configuration, and placing the sample between the two. 4Pi microscopy is implemented in a confocal arrangement [10-12], as discussed above, while I5 microscopy is a wide-field configuration [13]. The main resolution improvement for these systems is in the axial direction due to the collection of emission from an opposing objectives on both sides of the sample. Since the effective NA of the system is doubled, the overall resolution of the system is lowered to « 100-150 nm, depending on the sample. These systems are incredibly challenging to build and 1The main confinement of the emission PSF is in the axial direction, since only areas of high intensity undergo a two-photon absorption process. This confinement is slightly offset though by the fact that the excitation wavelength is twice that used in single-photon systems. 38 maintain in alignment, require the sample to be contained between two objectives, and are not commonly used except in specific research settings. 3.1.1 Optical Super-Resolution - Moving Beyond Abbe's Limit As stated above, the academic community has seen remarkable advancement in the development of far-field optical methods to push further past the conventional diffraction limit of Abbe and Rayleigh. While the conventional limit of optical image formation remains in place, methods to circumvent the diffraction barrier and extract information from spatial dimensions below the diffraction limit have become routine, through various methodologies. This section will give just a cursory introduction to the methods available to optically resolve beyond the classical diffraction limit, and directs the reader to references [14-20] for in-depth optical super-resolution reviews. Broadly speaking, optical super-resolution methods that have been developed in the past generation may be broken down into three categories: structured illumination techniques, point-spread function engineering techniques, and localization, or pointillist, techniques. 3.1.2 Structured Illumination Structured illumination is able to circumvent the diffraction barrier by illuminating the sample with an illumination profile that is harmonic in nature - usually the incident light is passed through a grating before the sample. This way, an illumination field with a distinct frequency in its illumination profile is created. This spatially varying harmonic signal is then scanned over the sample, in multiple positions and at multiple angles, and the characteristic fluorescence signal as a function of the position and orientation of the fringe pattern is recorded. Through the analysis of the signal variation as a function of the fringe location and orientation, structural features below the diffraction limit are obtained. The basic concept of the technique is that it expands the available spatial frequencies that are imaged due to the inherent spatial frequency embedded within the excitation profile. A schematic of the illustration in the frequency domain is shown in Figure 3.1. Each orientation and position of the periodic illumination profile extends the domain of a particular set of spatial frequencies. Multiple positions of the structured illumination profile are needed to expand this spatial frequency profile isotropically. Further information on this technique maybe found in references [21-24]. Theoretically 39 Figure 3.1. Concept of structured illumination. 
(a) As was seen with the angular spectrum representation, the set of observable spatial frequencies that propagate into the far-field is given by the illustrated circle with radius k0. (b) In structured illumination, the excitation light contains spatial frequency k1, and these higher frequencies may be visible as moiré fringes, as seen by the hatched circle. This region in the frequency domain has the same shape as the conventional case, but is now centered at k1. The maximum spatial frequency that can now be observed in the image plane is given by k0 + k1. Figure adapted from [21]. and experimentally, structured illumination enables a two-fold increase in resolving power over conventional systems. 3.1.3 STED Microscopy STED microscopy, which stands for stimulated emission depletion microscopy, is a variant on the conventional laser scanning confocal technique. The resolving power of confocal microscopy is a function of how tightly the incident laser beam can be focused, which is limited by the focusing ability of the lens. The basic idea behind STED systems is to scan two beams over the sample. The first beam, labeled "Exc. PSF" in Figure 3.2, is a conventional focused-spot excitation PSF. The second beam, the STED beam, is passed through a variable phase plate (called a vortex phase plate) such that the beam experiences destructive interference at the center of the beam profile (hence the point-spread function engineering). The phase plate is constructed such that the phase front of the laser beam undergoes a 0-2π [26] modulation in the azimuthal coordinate φ; every portion of the wavefront is π radians out of phase with the diametrically opposing portion of the wavefront, producing a null at the center of the beam.2 Figure 3.2. Concept of STED microscopy. The excitation point-spread function (Exc. PSF) is of the proper wavelength to stimulate the fluorophores within the diffraction-limited PSF from S0 to the first electronic excited state, S1 (see Figure 2.4 for reference). The STED PSF, which has a null, or region of zero intensity, at the center, is red-shifted such that it will cause the fluorophores within its spatial extent to undergo stimulated emission down to the ground state, before the molecules can fluoresce spontaneously. The wavelength is selected such that the transition is from the ground vibrational state of S1 to an excited vibrational state of S0, usually at the lowest energy region of the emission spectra. It is also chosen to have no overlap with the excitation spectra. The effective PSF (Eff. PSF) is then only composed of fluorophores that were at the null of the STED PSF, thereby creating an effective fluorescence region that is smaller than the diffraction limit. It is important to note that both the excitation and the STED PSF are diffraction-limited. By increasing the value of the peak STED intensity, the effective PSF can be made smaller and smaller. Figure adapted from [25]. The null is maintained as the STED beam is focused, and it is aligned to be centered onto the peak of the excitation PSF (so they are collinear). The wavelength of the STED beam must be carefully considered.
It is red-shifted to match the emission wavelength of the fluorophore used in the sample, but chosen such that it has zero overlap with the absorption profile.3 When excited molecules are 2The vortex phase plates are generally made via optical lithography, where the plate becomes progressively thicker in the azimuthal coordinate, effectively increasing the optical path of the beam as a function of i.e., e l$. The beam then experiences destructive interference at the center, creating a doughnut shaped beam. It should be noted that these types of vortex phase plates are designed for beams that have circular polarization. 3Generally, STED beams are of extremely high intensities (in the MW/m2 range and higher). If any portion of the absorption spectrum of the fluorophore overlapped with the wavelength of the STED beam, this would also cause molecules to transition from the ground electronic state to the first electronic excited state. 41 then illuminated by the STED beam, they are forced from the first excited electronic state back down to the ground state by the process of stimulated emission. Being stimulated emission, these photons propagate spatially and in phase with the depletion STED beam.4 Only molecules at the center of the STED beam, where the intensity distribution is zero, are left in the excited state, where they then decay and fluoresce spontaneously, creating an "effective PSF." This is illustrated in Figure 3.2. STED microscopy is merely a mechanism to spatially switch off fluorophores, and limit the confinement of active, fluorescing molecules. As an example, the excitation and STED beams can be numerically calculated [27] in the sample plane, and the effective PSF calculated as a function of the STED intensity ISTED and the saturation intensity I0. This is demonstrated in Figure 3.3. The calculations for the spatial extent of the focused beams follow a similar derivation as given for a single emission dipole as outlined in Appendix A, only instead of the calculating the electric fields produced by a radiating dipole and determining their transformation through the optical system, the electric fields of an incident laser beam are used, which produces similar results. The wavelength of the excitation beam is chosen to match the absorption profile of the fluorophore, while the STED wavelength is chosen to overlap with the emission profile, to generate stimulated emission. This is shown by the difference in the colormaps of the excitation PSF and the STED PSF in Figure 3.3. As can be seen from the image, while the excitation PSF and the STED PSF are both diffraction-limited, the region of allowed fluorescence, Figure 3.3(c), can be sub-diffraction-limit in size. Functionally, the effective resolution of a STED microscope (the effective PSF) is a function of the power of the STED beam as well the saturation intensity of a given fluorophore used in a sample. The saturation intensity is defined as the required intensity of the STED beam such that the rate of induced stimulated emission of the fluorophore by the STED beam is equal to the rate of spontaneous emission [29]. Conversely, this may also be stated at the intensity required such that the probability of fluorescence emission from the fluorophore is reduced by a factor of two [30]. At this saturation threshold, the 4Since the STED beams are extremely high power, back-reflections from optical components within the system and from the sample must still be filtered out of the emission path with the appropriate emission filters. 
Oftentimes, even two identical emission filters are used to achieve adequate signal-to-noise ratios. Figure 3.3. Numerical simulations [27, 28] illustrating the excitation ((a), blue), STED ((b), red) and effective ((c), green) PSFs in STED microscopy. For these simulations, NA = 1.4, n = 1.51, λex = 635 nm, λSTED = 760 nm, excitation intensity Iex = 1 MW/m², STED intensity ISTED = 10 MW/m², and the saturation intensity I0 = 1 MW/m². Scale bar: 200 nm. intensity of the STED beam, ISTED, is equal to the saturation intensity, I0. The effective resolution in a STED system can be written as a modification of Abbe's criterion, taking into consideration the properties of a particular fluorophore and a given STED beam intensity. This modification can be expressed mathematically by [25, 31]

Δx = λ / (2 n sin θ √(1 + σf τf ISTED^max)), (3.1)

where σf is the absorption cross-section of the fluorophore, τf is the lifetime of the excited state, and ISTED^max is the peak power of the STED beam. The product σf τf is the inverse of the saturation intensity, I0 [29], and Equation 3.1 can be rewritten as a ratio of the STED and saturation intensities,

Δx = λ / (2 n sin θ √(1 + ISTED / I0)). (3.2)

Since the focused beams are still diffraction-limited, the area in which molecules are forced down to the ground state becomes larger and larger as the intensity of the beam is increased, thereby decreasing the effective size of the null at the center and lowering the effective PSF of the system. Theoretically, the resolution of STED microscopy can be decreased toward zero, and for certain implementations, has been experimentally verified down to a few nanometers [32].5 Practically, however, the resolution is dependent on the sample in question and the fluorophores used. 5The sample in reference [32] was a negatively charged nitrogen-vacancy point defect in diamond, and the STED beam intensity can be increased to the maximum available power without destroying the sample. Typical values are in the 30-50 nm range for well-aligned systems with the appropriate fluorophores and high-intensity laser systems. As a final example of the dependence of the resolution on the ratio of the STED beam intensity to the saturation intensity of the fluorophore, Figure 3.4 illustrates the effect of lowering the saturation intensity I0 while keeping the STED beam intensity the same. The effective PSF of the system, shown in green, decreases as the saturation intensity I0 of the fluorophore is lowered by an order of magnitude in each panel. Objectively, the same effective PSF could be obtained by increasing the intensity of the STED beam by orders of magnitude.6 6If the aim is to image a biological sample, however, the lower the amount of light going into the specimen, the better. Further reading on STED microscopy can be found in references [6, 33-36]. 3.2 Localization Microscopy The key attribute in both structured illumination and STED microscopy is the fact that fluorophores are selectively illuminated. The sample is either illuminated with a periodic excitation profile in structured illumination, or the region of fluorescence is confined by engineered focal fields. This idea of isolating subsets of the entire fluorophore population may be taken to its logical extreme, namely if there were only one fluorophore, or point source, within the sample. As described in detail in Section 2.4, the mathematical form of the image of a point-source is given by an Airy profile. Imaging systems, however, are pixelated detectors with finite sampling abilities, and so the actual image of a point-source is a pixelated version of the mathematical model.
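The image-formation model just described - an ideal PSF sampled on finite pixels and corrupted by Poisson-distributed shot noise and Gaussian read-out noise - is straightforward to simulate. The sketch below uses a Gaussian approximation of the PSF (introduced formally in Section 3.2.1); the pixel grid size, photon number, and read-noise level are illustrative assumptions rather than values taken from this dissertation.

import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_psf_image(x0, y0, sigma, n_photons, n_pix=15, read_noise=2.0):
    # Pixelated image of a single point source: a Gaussian-approximated PSF,
    # Poisson shot noise, and Gaussian read-out noise (units of pixels/counts).
    y, x = np.mgrid[0:n_pix, 0:n_pix]
    psf = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2.0 * sigma ** 2))
    expected = n_photons * psf / psf.sum()                   # mean counts per pixel
    image = rng.poisson(expected)                            # shot noise
    return image + rng.normal(0.0, read_noise, image.shape)  # read-out noise

img = simulate_psf_image(x0=7.2, y0=6.8, sigma=1.3, n_photons=2000)

Images generated this way reproduce the qualitative appearance of Figure 3.5(b): a well-defined but noisy, pixelated peak whose center must be recovered by fitting.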
Figure 3.5(a) shows the ideal version of a 2D Airy function, while Figure 3.5(b) shows an ideal image of a point-source on a pixelated imaging system. Furthermore, an image of a point-source is corrupted by the statistics of photon counting, which are Poisson distributed (shot noise), as well as by read-out noise from the imaging system, which is Gaussian distributed [37, 38].

Figure 3.5. Illustration of an ideal and a pixelated point-spread function (PSF). (a) The ideal PSF; recall that the ideal theoretical point-spread function is given by the Airy profile. The first minima of the function may be seen at the outer edges of the image. (b) The result when imaged onto a pixelated detector of finite pixel size, with shot noise and read-out noise added. Shot noise stems from the inherent uncertainty in the photon count and scales with the square root of the number of detected photons. Read noise is the noise added to the signal as it propagates through the camera circuitry between detection and read-out to the computer. The images have been normalized, with the highest intensity of each image located at the center. The pixel size in this image corresponds to 75 nm in the sample plane. Scale bar: 300 nm.

The image of a single point-source may also be rendered as a three-dimensional surface plot, to illustrate that the center of the image has the highest intensity, corresponding to the highest number of photon counts. This can be seen in Figure 3.6. While the image of a point-source in the image plane has a much larger spatial extent than the actual point-source itself, the point-spread function still has a well-defined peak. This peak corresponds directly to the source position, and it is this fact that is the key to localization microscopy.

Figure 3.6. 3D surface illustration of a pixelated PSF, as would be recorded on a camera. The height of the surface plot represents intensity; in this case, photon counts. Inset: 2D image of the same PSF. Scale bar: 300 nm.

3.2.1 Information Extraction from the Point-Spread Function

If it is possible to isolate a single particle within a certain region of interest, the center of the PSF can be determined with an uncertainty much smaller than the width of the PSF. This fact has been exploited to great success in numerous particle-tracking experiments [39-43]. If a single point-source can be localized, as in particle-tracking experiments or in samples of extremely low density, such that the point-sources are more than a diffraction-limited distance from their nearest neighbor, then the images can be analyzed computationally to determine the location of the source.
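A quick Monte Carlo sketch (illustrative only; the photon number and PSF width are assumed values, not results from the dissertation) shows why this is possible: for N photons drawn from a Gaussian-shaped PSF, the scatter of the estimated center is roughly σ/√N, far smaller than the PSF width σ itself.

```python
import numpy as np

rng = np.random.default_rng(seed=2)

sigma_nm = 130.0    # width of the (Gaussian-approximated) PSF
n_photons = 1000    # photons detected from the single emitter
n_trials = 2000     # independent noisy images of the same emitter

# Each trial: detect n_photons photon positions and estimate the source
# location as their mean (the shot-noise-limited estimator).
estimates = np.array([
    rng.normal(0.0, sigma_nm, n_photons).mean()
    for _ in range(n_trials)
])

print(f"PSF width sigma        : {sigma_nm:.1f} nm")
print(f"scatter of estimates   : {estimates.std(ddof=1):.1f} nm")
print(f"predicted sigma/sqrt(N): {sigma_nm / np.sqrt(n_photons):.1f} nm")
```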
The process is then reduced to performing a data-fitting analysis on the image of the point-source and extracting a best-fit estimate of the location of the source. As was demonstrated in Section 2.4, the profile of the image of a point-source is an Airy function. Mathematically, the Airy function can be approximated quite accurately by a Gaussian profile, which is a computationally simpler and more tractable function. The minor differences in the wings of the two functions are generally insignificant in practice, because of the corruption of the image by noise. Thus, the problem is reduced to determining the peak and the width of the Gaussian distribution. This can be done via numerical fitting methods, using algorithms such as nonlinear least squares or maximum likelihood estimation.

The Gaussian function, as an approximation of the PSF, can be expressed in two dimensions as

f(x, y) = A \exp\!\left(-\frac{(x - x_0)^2}{2\sigma_x^2} - \frac{(y - y_0)^2}{2\sigma_y^2}\right) + B   (3.3)

where A is the amplitude of the PSF, x_0 and y_0 are the location of the point-source, σ_x and σ_y are the widths (standard deviations) of the Gaussian profile along the x and y axes, respectively, and B is the background. In practice, for a well-aligned microscope free of astigmatism in the imaging optics, σ_x and σ_y can be considered equal and denoted simply σ. Data fitting leads to an estimate of the source position (x_0, y_0), albeit with uncertainty in the estimation. Furthermore, the finite pixel size of the detector, shot noise in the image, and background noise must be considered. The photon-shot-noise-limited case occurs when the dominant noise in each pixel is due to photons originating from the source. The background-limited case occurs when the dominant noise in the image is due to signal not originating from the source, such as stray light, read-out error in the photon detector, and dark-current noise.

In estimating the position of the source in the shot-noise-limited case, the best estimate of position is given by the average of the positions of the individually detected photons. For the one-dimensional case along the x-axis, the uncertainty in the estimation is given by the common statistical formula for the standard error in the estimate of the mean [44], i.e.,

\langle (\Delta x)^2 \rangle = \frac{\sigma^2}{N}   (3.4)

where Δx is the error in the localization, σ is the standard deviation of the Gaussian distribution, and N is the total number of photons collected from the source. Pixelation effects must also be considered, since pixelation results in an uncertainty in the position of a photon within a given pixel. This uncertainty is per photon and can be added in quadrature to Equation 3.4. A pixel acts as a top-hat filter, which for a pixel of size a has variance a²/12. The uncertainty then becomes [45]

\langle (\Delta x)^2 \rangle = \frac{\sigma^2 + a^2/12}{N}   (3.5)

Pixelation effectively increases the size of the apparent PSF. Including background noise in the analysis is more complicated. An estimate of the effect of background noise can be made through a χ² analysis of the disparity between the actual number of photons within a pixel and the expected number. Finding the condition for the minimum of the function, dχ²/dx = 0, yields an equation relating the measured position x to the photon counts N. The derivation is lengthy and can be found in reference [45]. The result is an additional, background-dominated term in Equation 3.5:

\langle (\Delta x)^2 \rangle = \frac{\sigma^2 + a^2/12}{N} + \frac{8\pi\sigma^4 b^2}{a^2 N^2}   (3.6)

where b is the background photon count per pixel.
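The short sketch below simply evaluates Equations 3.4-3.6 for one illustrative parameter set (the numbers are assumptions chosen for the example, not values taken from the dissertation), showing how pixelation and background each inflate the localization uncertainty.

```python
import numpy as np

def localization_error(sigma_nm, a_nm, n_photons, b):
    """One-axis localization uncertainty (nm) per Equations 3.4-3.6.

    sigma_nm  : standard deviation of the Gaussian-approximated PSF
    a_nm      : pixel size, back-projected into the sample plane
    n_photons : number of signal photons collected from the source
    b         : background photons per pixel
    """
    shot_only = sigma_nm**2 / n_photons                                  # Eq. 3.4
    pixelated = (sigma_nm**2 + a_nm**2 / 12.0) / n_photons               # Eq. 3.5
    with_bg = pixelated + 8.0 * np.pi * sigma_nm**4 * b**2 / (a_nm**2 * n_photons**2)  # Eq. 3.6
    return np.sqrt(shot_only), np.sqrt(pixelated), np.sqrt(with_bg)

# Illustrative values: sigma = 130 nm, 75 nm pixels, 1000 photons, b = 5 per pixel.
dx_shot, dx_pix, dx_bg = localization_error(130.0, 75.0, 1000, 5)
print(f"shot-noise limit       : {dx_shot:.1f} nm")
print(f"+ pixelation           : {dx_pix:.1f} nm")
print(f"+ background (Eq. 3.6) : {dx_bg:.1f} nm")
```

For these assumed numbers the background term dominates the uncertainty, which is why an accurate estimate of b matters in practice.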
Experimentally, the background may be estimated from a frame in which no point-sources are present, or from the average value of regions within a frame far from active point-sources.

The concept of localization is illustrated in Figure 3.7. The top image is a simulated point-spread function as viewed on a camera detector. Below it, a best-fit Gaussian is drawn as a gray mesh surface plot, with standard deviation σ and an uncertainty in the position (x_0, y_0) given by α. Superimposed is a second Gaussian with standard deviation α, representing the uncertainty in the localization of the point-source position. This second Gaussian is rendered in the bottom part of the figure as a two-dimensional projection in the x-y plane.

Figure 3.7. Schematic of the concept of localizing a single point source. The top image is a diffraction-limited PSF as seen on a detector, such as a pixelated charge-coupled device (CCD). The middle image shows a mesh plot of the best-fit Gaussian (in gray) with width σ. The red Gaussian has width α = σ/√N, where N is the number of photons in the PSF. The bottom image shows the uncertainty in the localization of the point source in the x-y imaging plane. The greater the number of photons, the smaller the uncertainty.

This is the concept of localization microscopy: for a single point-source, the location of the source may be inferred from the point-spread function to a precision potentially far below the diffraction limit. Depending on the fluorophores used and the background levels, localization of bright probes, such as those typically used in particle-tracking experiments, can yield results in the few-nanometer range [41, 42]. Typical values for realistic localization microscopy measurements on densely labeled biological samples are in the 20-30 nm range. Thus, the precision with which individual fluorescent probes can be localized is typically an order of magnitude below the conventional diffraction limit.

The field of view of a microscope, however, is much larger than a single point-spread function, so more than one point source may lie within the field of view; if the point sources are far enough apart, their point-spread functions will not overlap. The problem then becomes not one of having a single source within the field of view, but of a sparse sampling set. As long as two point sources are more than a diffraction-limited distance apart, their PSFs will not overlap in the image plane, and each individual fluorophore can be isolated separately. This point is illustrated in Figure 3.8: while there are multiple point sources within the image, each point source has a distinct, nonoverlapping PSF in the image plane. A small region of interest (ROI) around each fluorophore may be extracted from the larger data set, the PSF can be localized, and each point source's location can be estimated to a precision below the diffraction limit. The question then becomes: how are single fluorophores isolated in a biological sample?

3.2.2 Isolating Single Fluorophores

In conventional microscopy techniques, it is impossible to isolate single molecules within a sample of densely packed fluorophores. Illuminating the sample with excitation light causes every fluorophore to respond to the excitation field and undergo fluorescence.
Even in STED microscopy, the effective focal spot, being an order of magnitude larger than the average size of a protein, will contain numerous fluorophores. Certain methods use the natural blinking states of quantum dots to isolate single emitters within an ensemble, but these methods are highly impractical for conventional imaging of biological structures [46]. Even a portion of a cell a few microns across can contain many thousands of proteins. What is required in localization microscopy is the ability to control the state of the fluorophores: to switch them from a nonfluorescent dark state to a bright active state in such a way that only a few fluorophores are active at any given time. The ability to control the activation state of fluorescent proteins came through an engineered variant of the original fluorescent workhorse, GFP [7, 47]. Termed photoactivatable GFP (PA-GFP), this protein is initially in a dark, nonfluorescent state

Figure 3.8. Cartoon schematic illustrating the concept of the diffraction limit for a sparsely distributed sample. Single point-sources are represented as stars. As the image of the point sources is relayed from the object plane to the image plane, the diffraction-limi |
| Reference URL | https://collections.lib.utah.edu/ark:/87278/s6z63xb2 |



