Bitmap Image Filters

The Filter

The operational method in this whole business is filterImage:, which takes a source NSImage as an argument and returns a new, gray-scale NSImage. Accomplishing this requires the ability to access the raw data of the source image, run an algorithm on that data, and then store the new pixel values in the returned NSImage. This is an oversimplification, since we don't actually obtain data from NSImage objects, but rather from instances of NSBitmapImageRep.



Recall from the first ImageApp column our discussion about the relationship between NSImage and NSImageRep. NSImage provides an abstraction layer that insulates clients of the class from the details of the image data format and implementation. To maintain the highest level of flexibility, however, Application Kit provides the class NSImageRep, which is a bridge between the insulating abstraction of NSImage and the data-dependent image representations defined in the NSImageRep subclasses.
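To make that relationship concrete, here is a minimal sketch, not from ImageApp itself, that walks an image's list of representations and picks out the bitmap one; it assumes image is some existing NSImage:

// Minimal sketch: inspect the representations an NSImage carries and
// find the one that is an NSBitmapImageRep.
NSArray *reps = [image representations];
unsigned i;

for (i = 0; i < [reps count]; i++) {
    NSImageRep *rep = [reps objectAtIndex:i];
    if ([rep isKindOfClass:[NSBitmapImageRep class]])
        NSLog(@"Found a bitmap representation: %@", rep);
}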

Accessing the bitmap image representation of the source image is the first thing we need to do, so let's fill in that part of the puzzle, as shown below:

- (NSImage *)filterImage:(NSImage *)srcImage
{
    NSBitmapImageRep *srcImageRep = [NSBitmapImageRep
                    imageRepWithData:[srcImage TIFFRepresentation]];
    .
    .
    .
}

The first thing we do in this method is retrieve the bitmap image representation of the source image. NSBitmapImageRep provides the convenience constructor imageRepWithData:, which will take as a parameter an NSData object that is formatted as TIFF data. The TIFF data used to create the instance is obtained using NSImage's TIFFRepresentation method; this returns exactly the NSData object we need.

The next step is to set up the new NSImage that will be the destination of the filter operation. This involves not only creating an instance of NSImage to return, but also the bitmap image representation used to assemble the image data. Combining these two pieces is a simple procedure that will be done before returning the NSImage.

In the next piece of code, we'll do just that. Creating an NSBitmapImageRep from scratch is a tedious procedure, so don't be shocked by the size of the code you're about to see. The initialization method we will use is over 125 characters long, making it one of the longest method names in all of Cocoa! We supplement filterImage: in the following way:

- (NSImage *)filterImage:(NSImage *)srcImage
{
    NSBitmapImageRep *srcImageRep = [NSBitmapImageRep
                    imageRepWithData:[srcImage TIFFRepresentation]];

    int w = [srcImageRep pixelsWide];
    int h = [srcImageRep pixelsHigh];
    int x, y;

    // The destination image; autoreleased so the caller follows the
    // usual Cocoa ownership conventions.
    NSImage *destImage = [[[NSImage alloc]
                    initWithSize:NSMakeSize(w,h)] autorelease];

    // An 8-bit, single-sample (gray-scale), interleaved bitmap with no
    // alpha channel. Passing 0 for bytesPerRow: and bitsPerPixel: lets
    // the class compute those values from the other arguments.
    NSBitmapImageRep *destImageRep = [[[NSBitmapImageRep alloc]
                    initWithBitmapDataPlanes:NULL
                    pixelsWide:w
                    pixelsHigh:h
                    bitsPerSample:8
                    samplesPerPixel:1
                    hasAlpha:NO
                    isPlanar:NO
                    colorSpaceName:NSCalibratedWhiteColorSpace
                    bytesPerRow:0
                    bitsPerPixel:0] autorelease];

    .
    .
    .

    [destImage addRepresentation:destImageRep];
    return destImage;
}

Now, let's pick this apart and see what we did. First, we declared some variables that will prove useful in the method. We have the height and width of the source image as well as variables that we will use to specify a position in the image.

Next, we created an NSImage object called destImage. Initializing this object involves specifying only the size of the image, which will be the same as that of the source image. NSImage instances allocated and initialized in this manner contain no image representations. Therefore, we add destImageRep to destImage's list of representations, which makes manipulating destImageRep effectively equivalent to manipulating destImage. Image representations are added to an image by invoking addRepresentation: on the NSImage object, with the image representation we wish to add as the argument.

Now it's time to allocate and initialize the new NSBitmapImageRep object. Raw image data itself is simple, but the variety of possible formats and organizations calls for a flexible initializer. Let's walk through it argument by argument.

The first argument, initWithBitmapDataPlanes:, allows us to provide the object with memory that is set up to store planar bitmap data. Raw image data is nothing more than a sequence of byte-sized elements that store the value of each sample of each pixel. A sample is a color component for a pixel, such as red, green, blue, alpha, cyan, and so on. When we create an NSBitmapImageRep, we have the option of specifying one of two organizational schemes for the data: planar or interleaved.

When an image's data is planar, there is a separate array in memory, or plane, for each color component. That is, in an RGB image there is one array of all the red samples, another of all the green samples, and a third of all the blue samples. We can create these arrays ahead of time and pass an array of pointers as this first argument. That array has as many elements as there are color components, and each element points to the head of one of the data planes; its type is unsigned char **.
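For illustration, a planar description of an 8-bit RGB image might be assembled like this; the buffers here are hypothetical and not part of the article's code:

// Hypothetical planar buffers for an 8-bit RGB image of w * h pixels.
unsigned char *redPlane   = malloc(w * h);   // all of the red samples
unsigned char *greenPlane = malloc(w * h);   // all of the green samples
unsigned char *bluePlane  = malloc(w * h);   // all of the blue samples

// The array of plane pointers, of type unsigned char **, that would be
// passed as the first argument.
unsigned char *planes[3] = { redPlane, greenPlane, bluePlane };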

Rather than supplying these data arrays, we can pass NULL, which means the data will be interleaved. In this scheme a single array contains all of the data for the image, arranged so that the color samples for the same pixel are adjacent in memory. For example, the first three elements of the array would be the values of the red, green, and blue samples for the first pixel; the next three bytes would be the same for the second pixel, and so on. We will be working with interleaved data.
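To see what the interleaved layout means in practice, here is a small hypothetical helper, not part of the article's filter, that fetches a single sample from an interleaved, 8-bit NSBitmapImageRep:

// Hypothetical helper: return sample s (0 = red, 1 = green, 2 = blue
// in an RGB image) of the pixel at (x, y). Assumes 8-bit samples and
// interleaved (non-planar) data.
static unsigned char sampleAt(NSBitmapImageRep *rep, int x, int y, int s)
{
    unsigned char *data = [rep bitmapData];

    // Rows may be padded, so step by bytesPerRow rather than by
    // pixelsWide * samplesPerPixel.
    return data[y * [rep bytesPerRow] + x * [rep samplesPerPixel] + s];
}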

The next two arguments, pixelsWide: and pixelsHigh:, let us specify the size of the image. Next, in bitsPerSample:, we specify how large each sample is, in bits. This discussion has assumed that each sample is one byte, or 8 bits, but samples of 12 or 16 bits are also possible; a highly specialized application is not precluded from choosing some more esoteric value. Finally, we specify the samplesPerPixel:. For an RGB image we would pass 3 for this parameter; since we are creating a gray-scale image, as we are here, we pass 1.


The combination of these last four parameters allows the class to determine how much memory to allocate to store all of the raw data in the image representation. We can determine the total number of pixels in the image by multiplying the height and the width. We can also determine the memory space needed for each pixel by multiplying the bitsPerSample by the samplesPerPixel.
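For example, a hypothetical 640-by-480 gray-scale image at 8 bits per sample and 1 sample per pixel needs 640 * 480 * (8 * 1) / 8 = 307,200 bytes, while an RGB version of the same image, at 3 samples per pixel, needs three times that: 921,600 bytes.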

Next, we indicate whether or not one of the samples specified in samplesPerPixel: is an alpha channel. To keep things simple, we will not concern ourselves with an alpha channel, so we pass NO. Then we specify whether or not the image data is planar. As we know, it will not be, so we pass NO here as well.

Almost there. Next we want to specify the colorspace for the image. A colorspace tells the graphics system how to interpret the image data. One familiar colorspace is the RGB colorspace, in which a color is represented by three components: red, green, and blue. This is our source image's colorspace (we are assuming, for simplicity's sake). Another is the white colorspace, where each color is a shade of gray and each pixel has only one sample. RGB colorspaces can also represent gray, as equal mixtures of the three component colors.

Because we are building a filter that takes a color image and converts it to gray-scale we will set the colorspace for our image representation to white colorspace, designated in Cocoa by the constant NSCalibratedWhiteColorSpace.
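As a preview of the conversion itself, one simple way to collapse an RGB pixel into a single gray sample is an equal mix of the three components. The sketch below is illustrative only; it is not necessarily the exact formula the finished filter will use:

// Hypothetical sketch: average the three components into one gray
// sample. The unsigned char operands are promoted to int before the
// addition, so the sum cannot overflow.
static unsigned char grayFromRGB(unsigned char r, unsigned char g,
                                 unsigned char b)
{
    return (unsigned char)((r + g + b) / 3);
}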

Finally, there are the last two arguments, to which we pass 0; they give us another way to specify how much memory to allocate for the image data. Passing 0 tells the class to compute the values itself. The argument bytesPerRow: is equivalent to the value of:

pixelsWide * bitsPerSample * samplesPerPixel / 8

The other argument, bitsPerPixel:, is another way of saying:

bitsPerSample * samplesPerPixel
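For the gray-scale destination image configured above, these work out to w * 8 * 1 / 8 = w bytes per row (one byte per pixel) and 8 * 1 = 8 bits per pixel, which is exactly what passing 0 asks the class to compute on our behalf.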

This is how we make a new NSBitmapImageRep, fully configured for our use. As always, study the documentation for this class as it provides a host of details not covered here. Now that we have that out of the way, let's move on to talk about how we actually convert a color image into a gray-scale image.

