The blood vessel segmentation method described in this paper aims at minimizing the false positive rate while maintaining high accuracy. Though developed for color retinal images, it can be applied to other tree-like vascular images as well. A computer vision approach is devised that mimics the image reading by human experts, based on a two-stage process of perception and interpretation. The first stage adopts multiscale filtering to detect objects of different sizes: a two-scale Laplacian of Gaussian scheme is used, with the related sigma values chosen according to the smallest and greatest vessel widths. An approximate segmentation is achieved simply by means of the Laplacian sign. The interpretation stage is application-specific and accomplishes classification and quantitative analysis. The skeleton of the binary structures is subdivided into vessel segments, and their features (position, orientation, length and width) are fed into an artificial neural network previously trained by back-propagation. The segments classified as vessels are assembled into the vascular tree by rule-based tracking. Results are evaluated on the STARE and DRIVE databases. Accuracy is 95% and the false positive rate is decreased to about 1%. The application to fundus images of infants with retinopathy of prematurity is also described.
As average life expectancy increases, pathologies like age-related macular degeneration are becoming more frequent; on the other hand, the progressive improvement in neonatal care leads to an increasing number of premature infants with lower gestational age and weight. Many other pathologies may affect blood vessels in all the body districts, which can be investigated by a wide range of angiographic modalities. In order to support medical research and routine examinations, usually carried out through visual inspection of angiograms, many computer methods have been developed (recent reviews cite over 200 papers [2,3]): they aim at early diagnosis, accurate quantification, and treatment and surgery planning. Moreover, vessel analysis can be useful for image registration, as needed in multi-modal imaging and patient follow-up. The first step of these methods is segmentation: vessels are detected among other structures and background. Then, vessel features are measured to quantify and follow up pathological alterations. Typical vessel features are width, length, branching angles and tortuosity. Unfortunately, manual segmentation of vessel networks is too time-consuming, and their evaluation by visual inspection is qualitative and subjective; on the other hand, the numerous computer methods developed are not always fully automatic, nor do they guarantee optimal performance. Therefore, vessel segmentation and analysis are still open areas for further research.
Our specific interest is in retinal vessels, which may be affected in the same way as the rest of the body vasculature and can be observed directly and non-invasively by color fundus photography. To process the retinal images obtained at our institute, a few published methods were taken into consideration [4-6]. In this activity, some difficulties were encountered. First, implementation time is long, particularly when not all implementation details are known. Second, processing time is also long for some methods. Last, accuracy depends on image quality: acquisition and pathological artifacts cannot always be distinguished from blood vessels, so the analysis results are biased by the false positive rate.
To overcome these difficulties and integrate both segmentation and analysis capabilities, a computer vision approach was devised to mimic the image reading by human experts, based on both perceptual and cognitive stages. It is well known that low level vision (perception) adopts multiscale filtering, which enhances and detects objects of different sizes over the background. Both vascular and non-vascular structures are detected at this level, unless some vessel model is used. Considering vascular trees, i.e. vessels branching from a known starting point or region, a tracking process can in principle avoid over-segmentation and false positives. Hence, the low level segmented structures are skeletonized and partitioned into single segments by using terminal, bifurcation and crossing points; vessel segments are measured and their features are used for classification (vessel/non-vessel) and for tracking the vessel tree from the starting location.
To this aim, a supervised artificial neural network (ANN) mimics the cognitive image interpretation step, and a rule-based algorithm is used for tracking. The relevant knowledge has been previously incorporated into the ANN by means of a training procedure. In this way, a data-driven step extracts vessel features, then a knowledge-based step accomplishes both vessel segmentation and analysis.
2.1 Previous work:
Segmentation methods can be classified according to several criteria [2,3]. Specifically, vessel segmentation methods can detect either the whole lumen or its centerline. However, it is difficult to assign a unique classification, because most methods use multiple techniques. In general, they rely on the elongated and branched geometry of blood vessels as well as on their intensity. These structures are enhanced by morphological operations or differential filtering, and then detected by thresholding, region-growing, iterative tracking, active contours, pattern recognition or artificial intelligence approaches. One approach uses a rotating 2D matched filter, where vessels are modeled as short, linear, fixed-width segments with a Gaussian profile: among all the rotation angles, the highest response is retained at each pixel and segmentation is achieved by local thresholding. In another, the matched filtered image is thresholded to extract seed points that are grown according to heuristic criteria. A further proposal is an adaptive local thresholding of the Gaussian matched filtered images, combined with the response to the first-order derivative of Gaussian filter, in order to reduce false positives. In other work, vessels are enhanced by multiscale filtering and then segmented by region growing, using the histograms of the local maxima of Hessian features. Gaussian filtering features have also been classified into vessel and non-vessel by a ridge-based k-NN classifier or by a RAdius-based Clustering ALgorithm. The Gabor wavelet transform can also extract vessel features to be classified. Mathematical morphology exploits features of the vasculature shape that are known a priori, such as it being piecewise linear and connected [12,13]. Rather than applying models or filters to the whole image, vessel tracking approaches start from an initial point and detect vessel centerlines or boundaries by analyzing the pixels orthogonal to the tracking direction.
Most of these implementations require user intervention to select starting and ending points. Robust automatic approaches have also been developed: one detects vessel centerline pixels with a tramline filter and then grows vessel segments with an active contour model. In fundus images, however, the optic nerve head (optic disk) is a natural choice as the initial point, since the retinal vessels depart from it. Among the various approaches for optic disk detection, only those which detect its location, without segmenting its boundary, are considered here. In one method, vessels are segmented through a complex set of rules and the optic disk is detected by means of the Hough transform. The OD has also been localized according to brightness, shape and size, e.g. as the area with the highest variation in intensity; unfortunately, these approaches fail in the presence of pathological artifacts. Better approaches are based on the fact that the retinal vasculature originates from the OD following a similar directional pattern: the vessel tree is first detected, then the OD is located by using the concept of fuzzy convergence, the intersection of the two main vascular groups, or direction matched filtering. Once vessels are segmented, their length and width, as well as other relevant features, must be measured. Various analysis papers take their input from a pre-segmented binary vascular image and quantify the parameters of the entire vascular network or those of selected vessels. Other papers report both segmentation and analysis methods [14,16]. Among those dedicated to retinal vessels, in recent years there has been increasing interest in developing software packages for the analysis of retinopathy of prematurity (ROP), such as the ROPtools program, the Retinal Image multiScale Analysis (RISA) software and the Computer-Aided Image Analysis of the Retina (CAIAR) program.
2.2 Retinal images:
Many databases of retinal images are now available on-line for the validation and comparison of segmentation and analysis methods. Since 2000, the STARE (STructured Analysis of the REtina) project has published 20 color fundus images (700x605 pixels, 8 bits per RGB channel) related to 10 normal and 10 pathological eyes. The DRIVE (Digital Retinal Images for Vessel Extraction) project has published its software as well as 40 color fundus images (565x584 pixels) divided into training and test sets, of which only seven show mild retinopathy. A more complete pathological database is DIARETDB1 (DIAbetic RETinopathy DataBase), but this project concentrates on pathological signs without considering retinal vessels. Another database of fundus images is available together with retinal vessel profiles for vessel width measurement. In the present work, both the STARE and DRIVE databases have been used for comparison purposes, as they also provide the vessel segmentations of two experts. Besides, a consecutive case series of eleven eyes of seven premature infants has also been examined; digital 640x480 images were acquired with a 130 degree RetCam II Imaging System (Clarity Medical Systems, Pleasanton, CA) according to the standard ROP protocol.
2.3 Present work:
2.3.1 Image pre-processing and segmentation of blood vessels:
Although routine fundus images are color photographs, appearing reddish, the green channel of the RGB representation (illustration 1a) is given as input to the processing steps, as it has the best contrast: in fact, green light is highly absorbed by blood, while blue light is absorbed and dispersed equally in the eye, and red light has the least absorption, being reflected mainly by the choroid. In this work, retinal images have been enhanced by Laplacian of Gaussian (LoG) filtering, implemented in its fast, separable form as one-dimensional convolutions instead of a 2D convolution, according to Marr & Hildreth. A resulting image is shown in illustration 1d, together with the results achieved by a rotating 2D Gaussian matched filter (illustration 1b) and multiple oriented Gabor wavelets (illustration 1c), for comparison purposes. Gaussian filters can be tuned by varying the sigma parameter, and they respond to vessels as well as to spurious objects, so the extracted features should be examined by a subsequent classifier. On the other hand, vessel width varies from tens of pixels down to the sub-pixel level. It is well known that such filters preserve objects whose size is tuned to their scale, whereas other objects are smoothed and smeared, until the finest ones are canceled. Therefore, instead of choosing a unique scale, multiscale filtering has been adopted to simulate the visual perception of human observers: edges are detected at a coarse scale and then localized at the finest scale. The greatest sigma value has been chosen equal to 3 for the public databases and 2 for our ROP image set; our smallest sigma value is 1. The window size, w, is the smallest odd number not less than 2·3σ.
The outputs of LoG filters with these scale values are very similar to the response of a Gaussian matched filter, because they extract the vessels with their whole width. Using smaller sigma values or applying these values to higher resolution images (or to larger vessels) results in extracting the edges of vessels rather than whole vessels.
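The separable two-scale LoG filtering described above can be sketched as follows. This is an illustrative SciPy-based implementation, not the authors' code: the kernel construction (second derivative of a normalized 1-D Gaussian, window covering ±3σ) follows the standard separable decomposition of the LoG, under the window-size convention given in the text.

```python
import numpy as np
from scipy.ndimage import convolve1d

def log_filter(img, sigma):
    """Laplacian of Gaussian via separable 1-D convolutions (after Marr &
    Hildreth): LoG = g''(x)g(y) + g(x)g''(y), so four 1-D passes replace
    one 2-D convolution. The window covers +/-3*sigma, giving an odd size."""
    half = int(np.ceil(3 * sigma))               # w = 2*half + 1, always odd
    x = np.arange(-half, half + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    g /= g.sum()                                 # normalized 1-D Gaussian
    g2 = (x**2 / sigma**4 - 1.0 / sigma**2) * g  # its second derivative
    gxx = convolve1d(convolve1d(img, g2, axis=1), g, axis=0)
    gyy = convolve1d(convolve1d(img, g2, axis=0), g, axis=1)
    return gxx + gyy

# Two-scale responses on the green channel, sigma = 1 (fine) and 3 (coarse),
# as in the text: green = rgb[..., 1].astype(float)
# fine, coarse = log_filter(green, 1.0), log_filter(green, 3.0)
```

On a bright ridge the response is negative at the ridge center, which is the sign convention used for thresholding in the next subsection (dark vessels on the green channel require the complementary convention or an inverted input).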
Unlike other enhancement methods, LoG filtering is fast and does not need further processing for vessel segmentation, such as region growing, thresholding or classification algorithms. In fact, it allows segmenting the vessels simply by the sign of the output. The two-dimensional Laplacian changes sign across boundaries, so segmentation is achieved by a single threshold: pixels with negative Laplacian are marked as vessels (white) and the other pixels as background (black). From the multiscale outputs, the local maxima over scales are usually kept pixel by pixel. In our experiments, a novel scheme is used to combine the two scale outputs (illustration 2): a bit-wise AND is applied at pixel level between the thresholded coarse- and fine-scale Laplacian images. In this way, the small vessels that are detected also at the coarse scale preserve their original width, whereas the noisy structures detected only at the fine scale are canceled. As a possible drawback of this scheme, larger vessels may exhibit spurious holes where their width exceeds the filter scale. Moreover, all filters produce spurious vessel segments: in particular, the second derivative calculation results in jagged vessel borders. To avoid artifacts in the subsequent skeletonizing step, simple median filtering is used as a cleaning procedure on the segmented images, according to the following sequence:
Ir = [(Is1 ∗ m5) AND Is2] ∗ m3
where Ir is the resulting segmented image (illustration 2c), Is1 and Is2 are the binary fine- and coarse-scale Laplacian images, which are combined by means of a pixel-by-pixel AND operator; ∗ indicates the 2D filtering operation, and m5 and m3 indicate median filters with 5x5 and 3x3 windows, respectively. Then, a classical thinning algorithm is used to extract the skeleton of the low level vascular tree (illustration 2d). Thinning deletes border points without destroying connectivity, and its result is a connected, topology-preserving central axis of the vascular tree. Three types of significant points are detected in the skeleton: terminal points, bifurcation points and crossing points. Based on these points, skeletons are partitioned into single segments. Two neighboring bifurcation points are merged into one crossing point. A tracking algorithm is needed to navigate from one branch to another; beforehand, however, every skeleton branch should be classified to avoid false positives.
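The segmentation sequence above and the skeleton-point detection can be sketched as follows. This is a minimal illustration: SciPy's `median_filter` stands in for the m5 and m3 operators, and the neighbour-count rules for skeleton points are a common convention, not the paper's exact procedure (e.g. the merging of two close bifurcations into one crossing is omitted).

```python
import numpy as np
from scipy.ndimage import convolve, median_filter

def segment_two_scale(lap_fine, lap_coarse):
    """Combine the two LoG outputs as in Ir = [(Is1 * m5) AND Is2] * m3:
    sign-threshold each scale, median-clean the fine map (5x5 window),
    AND it with the coarse map, then median-clean the result (3x3 window)."""
    s1 = lap_fine < 0    # fine-scale candidates: negative Laplacian
    s2 = lap_coarse < 0  # coarse-scale candidates
    cleaned = median_filter(s1.astype(np.uint8), size=5).astype(bool)
    return median_filter((cleaned & s2).astype(np.uint8), size=3).astype(bool)

def classify_skeleton_points(skel):
    """Classify skeleton pixels by their 8-neighbour count: 1 neighbour ->
    terminal, 3 -> bifurcation, 4 or more -> crossing (a common convention)."""
    kernel = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    nbrs = convolve(skel.astype(np.uint8), kernel, mode='constant')
    return skel & (nbrs == 1), skel & (nbrs == 3), skel & (nbrs >= 4)
```

In this scheme an isolated fine-scale response with no coarse-scale support is discarded, while a stripe detected at both scales survives with its fine-scale width, which is the behavior the text ascribes to the AND combination.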
2.3.2 Vessel classification and tracking:
Rather than classifying pixel features as vessel/non-vessel, in this work candidate vessels are first extracted by the LoG sign and then classified according to a symbolic representation of the vascular tree, as described by human experts: retinal vessels are elongated, connected, dark structures; they originate from the optic disc; they branch repeatedly, with tapering widths; they may cross or touch each other, without looping paths. Hence, unlike other proposed methods, the optic disc (OD) is detected prior to vessel segmentation. To this aim, the brightest and greatest blob is detected by smoothing the green image and matching a 2D Gaussian, which yields an approximate location of the optic nerve head. A human operator looks at this result and, when the brightest blob is produced by pathological structures or when the optic disc is not in the field of view, indicates the correct initial location of the vessel tree manually. From the experts' descriptions, the most relevant features for classification can be derived. Specifically, each skeleton vessel segment is tracked from one outermost point to the other by using a chain code, and the following measurements are made: length, width, inner and outer intensity, and position with respect to the optic disc. This analysis module gives a length value, L (along the skeleton segment), and the mean and s.d. of width, Wm and Wsd (evaluated on the local widths estimated at every third skeleton point as the shortest path across the binary vessel). Moreover, an average value of inner intensity, I, is computed through region growing on the green image within each single binary region. To attempt a distinction between true vessels and false positives, an outer intensity index, O, is also evaluated as follows. The average gray levels, O1 and O2, and their s.d. are computed for the two regions beside each candidate binary region, with equal length and half width.
If Δ = |O1−O2| > s.d., the outer intensity index, O, is set to −Δ, indicating a possible false positive (e.g. due to the OD border, between the bright disc and the dark retinal background); otherwise O is set to (O1+O2)/2. This value indicates a possible true vessel or an artifact between two pathological structures (e.g. exudates), which can be discriminated when the latter are much brighter than the normal retinal background. Finally, by using the two outermost points of each segment, two distances from the OD, d1 and d2, are computed for each candidate vessel segment. Thus, for each candidate vessel, we have seven features that are fed into an artificial neural network (ANN) for classification. Among the various ANN classifiers, a three-layer feed-forward architecture has been chosen and trained through the back-propagation algorithm, based on batch gradient descent with momentum. The input layer has 7 neurons, according to the aforementioned feature vector, and the output layer has one neuron. Both input and output have linear transfer functions. The neurons of the hidden layer have log-sigmoid transfer functions and their number, n, has been determined empirically, in order to minimize the mean square error during training and testing as well as to improve generalization and avoid over-fitting. To the latter aim, the total number of vessel segments in the training set is taken to be much greater than the number of parameters in the network. The training set includes one half (10) of the STARE images and one half (20) of the DRIVE images. The other halves of both databases and our ROP cases are included in the test set. A human expert manually labeled the candidate vessel/non-vessel segments (over 100 for each image); their features were computed, normalized to the range [0,1], and fed into the ANN. The maximal number of epochs was 3000 and the goal mean square error was 0.005.
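The classifier architecture just described (7 linear inputs, n log-sigmoid hidden units, 1 linear output, batch gradient descent with momentum) can be sketched in NumPy as below. This is a minimal sketch, not the authors' implementation; the initialization scheme, learning rate and momentum values here are illustrative assumptions.

```python
import numpy as np

def logsig(z):
    """Log-sigmoid transfer function used in the hidden layer."""
    return 1.0 / (1.0 + np.exp(-z))

class VesselNet:
    """Minimal 7-n-1 feed-forward network: linear input/output units,
    log-sigmoid hidden units, trained by back-propagation as batch
    gradient descent with momentum."""
    def __init__(self, n_hidden=20, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (7, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.vel = [np.zeros_like(p) for p in (self.W1, self.b1, self.W2, self.b2)]

    def forward(self, X):
        self.h = logsig(X @ self.W1 + self.b1)  # hidden activations
        return self.h @ self.W2 + self.b2       # linear output

    def train_batch(self, X, y, lr=0.05, momentum=0.9):
        """One batch update; returns the mean square error before the update."""
        out = self.forward(X)
        err = out - y[:, None]
        n = len(X)
        dW2 = self.h.T @ err / n
        db2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * self.h * (1.0 - self.h)  # back-prop through logsig
        dW1 = X.T @ dh / n
        db1 = dh.mean(axis=0)
        params = (self.W1, self.b1, self.W2, self.b2)
        for i, (p, g) in enumerate(zip(params, (dW1, db1, dW2, db2))):
            self.vel[i] = momentum * self.vel[i] - lr * g  # momentum update
            p += self.vel[i]
        return float((err ** 2).mean())
```

In practice the seven normalized features (L, Wm, Wsd, I, O, d1, d2) would form each row of X, with the expert's vessel/non-vessel label (1/0) as the target.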
Several experiments were made by varying the number n of hidden neurons, the momentum value, the learning rate and the heuristics for faster convergence (variable learning rate and resilient back-propagation). Once the best network was trained, it was simulated with the test set for performance evaluation. The binary segments classified as vessels are then assembled into their vascular tree by a tracking algorithm: to this aim, they are sorted based on the distance from the optic disc (OD) and tracked by chain code, starting from their terminal point nearer to the OD. According to this distance sorting, the vessel segments are labeled as roots (parents) of partial vessel trees; for each parent, its branches (daughters) are then tracked from the current bifurcation point until another bifurcation point or their terminal point is reached. The daughters of every parent vessel, vk, are numbered as 2vk and 2vk+1, in order to track the tree in either direction. Tracking continues iteratively until no other daughters are detected and no other vessel segment remains unlabeled to be identified as a new parent. The final step of tracking examines the labeled vessels again, trying to connect them if their first terminal point (the terminal point nearer to the OD) is within a short distance (r) of the second terminal point (the farther one from the OD) of another labeled segment. This tracking is similar to earlier approaches, but it does not need user interaction, and the r distance allows following interrupted vessels or disconnected branches, which may result from processing or pathological artifacts.
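The tree assembly above can be sketched as follows. This is a simplified, hypothetical rendering of the scheme: each classified segment is reduced to a pair of endpoints, each parent is capped at two daughters (to keep the 2v/2v+1 numbering well defined), and the connection rule is the same distance-r test applied during growth rather than as a separate final pass.

```python
import math

def track_trees(segments, od, r=20.0):
    """Rule-based assembly of classified segments into vessel trees.
    `segments` is a list of endpoint pairs; `od` is the optic disc location.
    Segments are sorted by the distance of their nearer endpoint from the OD;
    each unlabeled segment starts a new tree with root label 1, and unlabeled
    segments whose near endpoint lies within r of a parent's far endpoint
    become its daughters, numbered 2v and 2v+1."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # orient every segment so its first endpoint is the nearer one to the OD
    oriented = [(p, q) if dist(p, od) <= dist(q, od) else (q, p)
                for p, q in segments]
    order = sorted(range(len(oriented)), key=lambda i: dist(oriented[i][0], od))
    labeled, trees = set(), []
    for i in order:
        if i in labeled:
            continue
        tree, queue = {1: i}, [(1, i)]
        labeled.add(i)
        while queue:                      # breadth-first tracking of daughters
            v, k = queue.pop(0)
            far = oriented[k][1]
            daughters = [j for j in order if j not in labeled
                         and dist(oriented[j][0], far) <= r][:2]
            for slot, j in enumerate(daughters):
                tree[2 * v + slot] = j    # daughters numbered 2v and 2v+1
                labeled.add(j)
                queue.append((2 * v + slot, j))
        trees.append(tree)
    return trees
```

Each returned tree maps the heap-style label (1 for the root, 2v and 2v+1 for the daughters of v) to a segment index, so a tree can be traversed in either direction from any label.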
Finally, the vessel trees are filled from the labeled skeletons. From these vessel data, further geometrical and topological analyses may be accomplished.
Several experiments were undertaken to devise the optimal settings of the proposed method. Ad hoc software was written in a C++ programming environment, with a neural network toolkit developed beforehand at our lab. The STARE and DRIVE public databases provided a most valuable reference for these activities. Specifically, the proposed method is evaluated by computing sensitivity and specificity at pixel level with respect to the manually segmented vessels available in both databases. The first observer's segmentation is taken as ground truth. Any pixel classified as vessel in both the ground truth and the segmented image is counted among the true positives; any pixel classified as vessel in the segmented image but not in the ground truth is counted among the false positives. To allow comparison with other methods, accuracy is also evaluated and the same regions of interest were used: specifically, the spatial masks available on-line for the DRIVE images, whereas the regions of interest for the STARE database were generated by the code provided by Soares. The best performance of Gaussian smoothing for optic disc matching was obtained with a sigma value of 5. In the DRIVE database, the OD was not correctly detected in 3 out of 40 images (7.5%); in the STARE database, the OD went undetected in 5 out of 20 images (25%), because of the more severe pathological signs. Two-scale LoG segmentation and skeleton partitioning produced at least 2000 candidate vessel segments for every image, usually reduced to fewer than 500 after artificial neural network classification. In the first network configuration, the number n of hidden layer neurons was 15. In this training phase, the expert was asked to label the most relevant vessels, i.e. the candidate segments departing from the OD and their first order ramifications. Unfortunately, even though the mean square error was very good (0.002), several vessels were not classified, giving a low sensitivity (53%).
This problem is worsened in ROP images, where most vessels are very thin. Therefore, another training session was carried out in order to label also the vessel segments of second and third order ramifications. Finally, two ROP images were added to the training set. In summary, the last network configuration has n=20 hidden layer neurons and its training convergence is quickened by using resilient back-propagation. The mean square error of 0.05 was reached within 3000 iterations during training, and its value in the testing phase was about 0.012. For a typical fundus image, the processing time was nearly one and a half minutes, including about 40 s for tracking and filling, using a PC with a 1 GHz clock and 1 GB memory. Examples of segmentation results are shown in illustrations 3 and 4. A normal DRIVE image segmented by the first observer, by the Primitive-Based Method, and by the proposed method, after the first and last phase of training, is shown in illustration 3: sensitivity is clearly increased from illustration 3c to illustration 3d. Segmentation results for a pathological STARE image, shown in illustration 4a, can be seen for the first observer (illustration 4b), for Hoover's method (illustration 4c) and for the proposed method, labeled as LoG-ANN (illustration 4d). Illustration 5 presents the average values of specificity, sensitivity and accuracy computed on the DRIVE test images by different methods: the performance measures are reported from their original papers for the PBM method, for the second observer (as a measure of human variability), for Mendonça et al., for Soares et al., Al-Diri et al., Zhang et al. and for the proposed method. The same results of these methods applied to the STARE database are presented in illustration 6, also given separately for normal and pathological images. Results of Staal et al.
and Al-Diri et al. are reported only in the last part of illustration 6, since they do not distinguish between normal and abnormal cases. The results of Hoover et al. are also reported in illustration 6. It is worth noting that this table is not limited to the 10 test images, but refers to all 20 STARE images, for comparison purposes. Total accuracy, computed on both databases, was nearly 0.95 (0.9485 ± 0.003), which is comparable with the best accuracy values reported in the literature. However, the false positive rate was very close to 1%, so that specificity is better than 98%, whereas the values reported in the literature are not better than 96-97%. On the other hand, sensitivity is less than 70% and cannot reach the higher values reported in some papers (over 72% for [6,9,12], about 77% and 90% for the second observer in the DRIVE and STARE images, respectively). Finally, it is worth noting the optimal performance of the proposed method in discriminating between vessels and other spurious structures, as visible in pathological eyes (illustration 4) and in low quality images (illustration 7). With regard to tracking, an exhaustive evaluation has not yet been done. Because of segmentation errors or processing artifacts, some vessel segments are missing and some vessel daughters are incorrectly identified as new parents. However, the last tracking step is able to follow interrupted vessels and to connect some isolated segments to the nearest vascular tree. To this aim, the value r = 20 was chosen empirically for the distance parameter.
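The pixel-level evaluation used throughout these results can be expressed compactly. This is a straightforward sketch of the standard definitions, with the region-of-interest mask restricting the counts as described for the DRIVE and STARE comparisons.

```python
import numpy as np

def pixel_metrics(segmented, truth, mask):
    """Pixel-level sensitivity, specificity and accuracy inside a region of
    interest. All inputs are boolean arrays; `truth` is the first observer's
    manual segmentation and `mask` selects the region of interest."""
    s, t = segmented[mask], truth[mask]
    tp = np.sum(s & t)      # vessel in both segmentation and ground truth
    tn = np.sum(~s & ~t)    # background in both
    fp = np.sum(s & ~t)     # vessel only in the automatic segmentation
    fn = np.sum(~s & t)     # vessel only in the ground truth
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```

Under these definitions, the reported false positive rate of about 1% is simply 1 − specificity, which is why specificity above 98% corresponds to the low false positive rate emphasized in the text.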
Computer analysis methods for fundus images aim at reducing the amount of manual interaction and at replacing qualitative, subjective evaluation, specifically in the follow-up of patients. As an attempt to face the problems still open to discussion, this paper presents a method for vessel segmentation and analysis, based on a computer vision approach that mimics the image reading by human experts (detection and interpretation). In the first stage, multiscale filtering detects candidate vessel segments of different sizes. Many matched filters can enhance tubular objects and reduce noise at the same time: e.g. the Gabor wavelet can be tuned to specific spatial frequencies and has directional capabilities. However, Laplacian of Gaussian filtering has been adopted on both theoretical and practical grounds: it approximates the visual pre-processing at the retina level and is simple and fast. It allows multiscale enhancement as well as segmentation by sign thresholding, provided that its pass-band is tuned to the vessel width. For this reason, the sigma parameter must be tailored to the image resolution and to the expected vessel sizes. As the second order spatial derivatives extract vessel topology even in the presence of intensity variations and low contrast, no preprocessing was used for brightness normalization. The interpretation stage accomplishes quantitative analysis and classification of candidate vessels. In general, quantitative features can be classified by supervised methods, such as the Bayes rule, the k-nearest-neighbor classifier, clustering algorithms and artificial neural networks. ANN classifiers have also been used for the detection and evaluation of retinal lesions. They are relatively simple and very robust, able to incorporate nonlinear mappings between a set of input variables and a set of output variables. The features used in the proposed classifier are pre-selected based on domain knowledge; there is no selection stage.
Training was done in two phases and the neural network parameters were carefully tuned, so it may be difficult to apply this method to images with a different resolution without re-sampling them. The artificial neural network demands manual labeling and time-consuming training, but the simulation of the trained network is accurate and fast. The success achieved using simple features, like length, width, intensity, orientation and distance from the OD, encourages us to try another classification scheme, based on symbolic rules, even if these features perhaps interact in a complex manner. The use of the OD is of particular interest both to classify vessel segments and to assemble the vascular tree. A more robust and fully automated algorithm for OD location is desirable, in order to avoid the user interactivity in marking that point. Differences between our results and manual segmentation are not far from the inter-observer variability. Accuracy is better than that of the second observer. Specificity is better than the values reported in the literature; on the contrary, sensitivity is worse. However, our true positives refer to the retinal vascular arcades and their principal ramifications: in our opinion, evaluation of all the vessels in second and third order ramifications is not necessary for most patients. A low false positive rate in vessel segmentation is very important in applications like ROP, where therapy is decided according to the normal or abnormal width and tortuosity of retinal vessels, as shown in illustration 7. In this application, it is essential to make these measurements on the whole length of the principal vessels, whereas their thinner branches may give less information. Tracking of classified vessels allows tortuosity analysis as well as refinement of width measurements. In fact, after LoG segmentation, thin vessels may not preserve their natural tapering. Therefore, further analysis can be performed on the intensity profiles in the gray level image.
Overall, the proposed method exhibits low computational complexity and good performance. We plan to use the measurements of vessel width and tortuosity for a more complete validation of the proposed method, as well as to apply it to other types of vascular images. Preliminary results were obtained on retinal fluoroangiograms and coronary angiograms, whereas the neo-vascularization analysis in corneal photography needs a modification, because there is no unique starting point like the OD. The results on ROP images are also encouraging, as ophthalmologists' evaluation of width enlargement and tortuosity severity is subjective. Therefore, automatic and accurate methods are very useful for analyzing retinal images at regular time intervals, to evaluate disease progression and therapy efficacy.
1. Early Treatment for Retinopathy of Prematurity Cooperative Group. The Incidence and Course of Retinopathy of Prematurity: Findings From the Early Treatment for Retinopathy of Prematurity Study. Pediatrics 2005; 116:15-23.
2. Kirbas C, Quek F. A Review of Vessel Extraction Techniques and Algorithms. ACM Computing Surveys 2004; 36:81–121.
3. Lesage D, Angelini ED, Bloch I, Funka-Lea G. A review of 3D vessel lumen segmentation techniques: Models, features and extraction schemes. Medical Image Analysis 2009; 13: 819–845.
4. Hoover A, Kouznetsova V, Goldbaum M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans Med Imaging 2000; 19:203–210.
5. Martinez-Perez ME, Hughes AD, Thom SA, Bharath AA, Parker KH. Segmentation of blood vessels from red-free and fluorescein retinal images. Medical Image Analysis 2007; 11: 47–61.
6. Soares JVB, Leandro JG, Cesar RM Jr., Jelinek HF, Cree MJ. Retinal Vessel Segmentation Using the 2-D Gabor Wavelet and Supervised Classification. IEEE Trans Med Imaging 2006; 23:1214-1222.
7. Lindeberg T. Scale-Space Theory in Computer Vision. Kluwer Academic Publishers, The Netherlands, 1994.
8. Chaudhuri S, Chatterjee S, Katz N, Nelson M, Goldbaum M. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans Med Imaging 1989; 8:263–269.
9. Zhang B, Zhang L, Zhang L, Karray F. Retinal Vessel extraction by matched filter with first-order derivative of Gaussian. Computers in Biology and Medicine 2010; 40:438–445.
10. Staal J, Abramoff MD, Niemeijer M, Viergever MA, van Ginneken B. Ridge-based vessel segmentation in color images for the retina. IEEE Trans Med Imaging 2004; 23:501–509.
11. Salem SA, Salem NM, Nandi AK. Segmentation of retinal blood vessels using a novel clustering algorithm (RACAL) with a partial supervision strategy. Med Bio Eng Comput 2007; 45:261–273.
12. Mendonça AM, Campilho A. Segmentation of Retinal Blood Vessels by Combining the Detection of Centerlines and Morphological Reconstruction. IEEE Trans Med Imaging 2006; 25:1200–1213.
13. Zana F, Klein JC. Segmentation of Vessel-Like Patterns Using Mathematical Morphology and Curvature Evaluation. IEEE Trans Image Processing 2001; 10: 1010- 1019.
14. Aylward SR, Bullitt E. Initialization, noise, singularities, and scale in height ridge traversal for tubular object centerline extraction. IEEE Trans Med Imaging 2002; 21: 61-75.
15. Al-Diri B, Hunter A, Steel D. An Active Contour Model for Segmenting and Measuring Retinal Vessels. IEEE Trans Med Imaging 2009; 28:1488-1497.
16. Youssif AA, Ghalwash AZ, Ghoneim ASA. Optic Disc Detection From Normalized Digital Fundus Images by Means of a Vessels’ Direction Matched Filter. IEEE Trans Med Imaging 2008; 27: 11-18.
17. Wilson CM, Cocker KD, Moseley MJ, Paterson C, Clay ST, Schulenburg WE, Mills MD, Ells AL, Parker KH, Quinn GE, Fielder AR, Ng J. Computerized Analysis of Retinal Vessel Width and Tortuosity in Premature Infants. Invest Ophthalmol Vis Sci 2008; 49:3577-3585.
18. Hoover A, Goldbaum M. Locating the Optic Nerve in a Retinal Image Using the Fuzzy Convergence of the Blood Vessels. IEEE Trans Med Imaging 2003; 22:951-958.
19. Martinez-Perez ME, Hughes AD, Stanton AV, Thom SA, Chapman N, Bharath AA, Parker KH. Retinal Vascular Tree Morphology: A Semi-Automatic Quantification. IEEE Trans Biomed Eng 2002; 49:912-917.
20. Al-Diri B, Hunter A, Steel D, Habib M, Hudaib T, Berry S. REVIEW - A Reference Data Set for Retinal Vessel Profiles. 30th Annual Int. Conf. IEEE Eng. in Medicine and Biology Society, 2008.
21. Preece SJ, Claridge E. Monte Carlo modeling of the spectral reflectance of the human eye. Physics in Medicine and Biology 2001; 47: 2863–2877.
22. Zhang TY, Suen CY. A Fast Parallel Algorithm for Thinning Digital Patterns. Communications of the ACM 1984; 27: 236-239.
23. Freeman H. On the encoding of arbitrary geometric configurations. IRE Transactions on Electronic Computers 1961, EC- 10: 260-268.
24. Nekovei R, Sun Y. Back-propagation network and its configuration for blood vessel detection in angiograms. IEEE Trans Neural Netw 1995; 6:64-72.
25. García M, López MI, Álvarez D, Hornero R. Assessment of four neural network based classifiers to automatically detect red lesions in retinal images. Medical Engineering & Physics 2010; 32:1085–1093.
26. Lowell J, Hunter A, Steel D, Basu A, Ryder R, Kennedy RL. Measurement of Retinal Vessel Widths From Fundus Images Based on 2-D Modeling. IEEE Trans Med Imaging 2004; 23: 1196-1204.
Source(s) of Funding
This work was supported by a research fund of Florence University.
The authors have no financial or personal relationships with other people or organizations that could inappropriately influence this work. No ethical approval was required.
This article has been downloaded from WebmedCentral.