(3) Pixel Classification: Is based on the number of object pixels classified as background pixels and the number of background pixels classified as object pixels. Let $A$ be the set of object pixels in the groundtruth image and $B$ be the set of object pixels in the segmented image (refer to Figure 4.8(b)). Formally, we have

$$A = \{p_1, p_2, \ldots, p_a\} = \{(x_{p_1}, y_{p_1}), (x_{p_2}, y_{p_2}), \ldots, (x_{p_a}, y_{p_a})\}$$

and

$$B = \{q_1, q_2, \ldots, q_b\} = \{(x_{q_1}, y_{q_1}), (x_{q_2}, y_{q_2}), \ldots, (x_{q_b}, y_{q_b})\}.$$

Since pixel classification must be positive, I define an intermediate value $N$ as follows:

$$N = 1 - \frac{\bigl(n(A) - n(A \cap B)\bigr) + \bigl(n(B) - n(A \cap B)\bigr)}{n(A)}$$

where $A \cap B = \{(x_k, y_k),\ k = 1, \ldots, m \mid (x_k, y_k) \in A \text{ and } (x_k, y_k) \in B\}$. Using the value of $N$, pixel classification can then be computed as

$$\text{Pixel Classification} = \begin{cases} N, & \text{if } N > 0 \\ 0, & \text{otherwise.} \end{cases}$$

(4) Object Overlap: Measures the area of intersection between the object region in the groundtruth image and the segmented image, divided by the object region. As defined in the pixel classification quality measure, let $A$ be the set of object pixels in the groundtruth image and $B$ be the set of object pixels in the segmented image (see Figure 4.8(b)). Object overlap can be computed as

$$\text{Object Overlap} = \frac{n(A \cap B)}{n(A)}$$

where $A \cap B = \{(x_k, y_k),\ k = 1, \ldots, m \mid (x_k, y_k) \in A \text{ and } (x_k, y_k) \in B\}$.

(5) Object Contrast: Measures the contrast between the object and the background in the segmented image, relative to the object contrast in the
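The pixel classification and object overlap measures above reduce to simple set operations on pixel coordinates. A minimal sketch of both, assuming object pixels are given as Python sets of $(x, y)$ tuples (the function names `pixel_classification` and `object_overlap` are illustrative, not from the original text):

```python
def pixel_classification(A, B):
    """Pixel classification quality: 1 minus the fraction of
    misclassified pixels (object pixels labelled background plus
    background pixels labelled object), relative to the size of the
    groundtruth object n(A), clamped at zero so the measure is positive.

    A: set of (x, y) object pixels in the groundtruth image
    B: set of (x, y) object pixels in the segmented image
    """
    inter = A & B  # A intersect B
    n = 1 - ((len(A) - len(inter)) + (len(B) - len(inter))) / len(A)
    return max(n, 0.0)


def object_overlap(A, B):
    """Fraction of the groundtruth object region covered by the
    segmented object region: n(A intersect B) / n(A)."""
    return len(A & B) / len(A)


# Toy example: a 2x2 groundtruth object, segmentation misses one pixel.
A = {(0, 0), (0, 1), (1, 0), (1, 1)}
B = {(0, 0), (0, 1), (1, 0)}
print(pixel_classification(A, B))  # 1 - (1 + 0)/4 = 0.75
print(object_overlap(A, B))        # 3/4 = 0.75
```

A perfect segmentation ($B = A$) yields 1.0 for both measures, while a heavily over-segmented result can drive $N$ negative, which the clamp maps to 0.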
Show (3) Pixel Classification: Is based on the number of object pixels classified as background pixels and the number of background pixels classified as object pixels. Let A be the set of object pixels in the groundtruth image and B be the set of object pixels in the segmented image (refer to Figure 4.8(b)). Formally, we have A = [Pi*P2 , - , P a ) = [(Xpi>ypi)AXp2>yp2) > ( x pA, ypA)) and B = {qlyq2, qB) = {{xqX, y q2\ (xq2, y q2), .... (xqB, y qB)}. Since pixel classification must be positive, I define an intermediate value N as follows: = j (n(A) - n(AnB)) + (n(B) - n(AnB)) n{A) where A nB = {(xk, y k), k - 1, ..., m I (xk, y k)e A and (xk, y k)e B }. Using the value of N, pixel classification can then be computed as { N, if N > 0 Pixel Classification = | 0, otherwise. (4) Object Overlap: Measures the area of intersection between the object region in the groundtruth image and the segmented image, divided by the object region. As defined in the pixel classification quality measure, let A be the set of object pixels in the groundtruth image and B be the set of object pixels in the segmented image (see Figure 4.8(b)). Object overlap can be computed as ~ . n(Ar^B) Object Overlap = ----------- n(A) where AnB = {(**, y*), k - 1, ..., m I (xk, yk)e A and (xk, y k)eB) . (5) Object Contrast: Measures the contrast between the object and the background in the segmented image, relative to the object contrast in the 47 |