IMAQ Vision for Visual Basic User Manual
August 2004 Edition
Part Number 371257A-01
Support
Worldwide Technical Support and Product Information: ni.com
Important Information Warranty The media on which you receive National Instruments software are warranted not to fail to execute programming instructions, due to defects in materials and workmanship, for a period of 90 days from date of shipment, as evidenced by receipts or other documentation. National Instruments will, at its option, repair or replace software media that do not execute programming instructions if National Instruments receives notice of such defects during the warranty period.
Contents

About This Manual
    Conventions  ix
    Related Documentation  x

Chapter 1, Introduction to IMAQ Vision
    About IMAQ Vision  1-1
    Documentation and Examples

Improve an Image  2-9
    Lookup Tables  2-9
    Filters  2-9
        Convolution Filter  2-10
        Nth Order Filter

Set Search Areas  5-8
    Defining Regions Interactively  5-8
    Defining Regions Programmatically  5-9
Find Measurement Points  5-9
    Finding Features Using Edge Detection

Chapter 6, Calibrating Images
    Perspective and Nonlinear Distortion Calibration  6-1
        Defining a Calibration Template  6-2
        Defining a Reference Coordinate System  6-3
        Learning Calibration Information  6-5
        Specifying Scaling Factors
About This Manual
The IMAQ Vision for Visual Basic User Manual is intended for engineers and scientists who have knowledge of Microsoft Visual Basic and need to create machine vision and image processing applications using Visual Basic objects. The manual guides you through these tasks, from setting up the imaging system to taking measurements.

Conventions
The following conventions appear in this manual:
»  The » symbol leads you through nested menu items and dialog box options to a final action.
Related Documentation
This manual assumes that you are familiar with Visual Basic and can use ActiveX controls in Visual Basic. The following are good sources of information about Visual Basic and ActiveX controls:
• msdn.microsoft.com
• Documentation that accompanies Microsoft Visual Studio
In addition to this manual, the following documentation resources are available to help you create your vision application.
• NI Vision Builder for Automated Inspection: Inspection Help—If you need information about how to run an automated vision inspection system using NI Vision Builder AI, refer to this help file.

Other Documentation
• NI OCR Training Interface Help—If you need information about the OCR Training Interface, refer to this help file.
• National Instruments IMAQ device user manual—If you need installation instructions and device-specific information, refer to your device user manual.
Chapter 1: Introduction to IMAQ Vision

This chapter describes the IMAQ Vision for Visual Basic software and associated software products, discusses the documentation and examples available, outlines the IMAQ Vision for Visual Basic architecture, and lists the steps for creating a machine vision application.

Note: For information about the system requirements and installation procedure for IMAQ Vision for Visual Basic, refer to the Vision Development Module Release Notes that came with the software.
In addition to this manual, several documentation resources are available to help you create a vision application:
• IMAQ Vision Concepts Manual—If you are new to machine vision and imaging, read this manual to understand the concepts behind IMAQ Vision.
• IMAQ Vision for Visual Basic Reference—If you need information about individual methods, properties, or objects, refer to this help file.
cwimaq.ocx
cwimaq.ocx contains the following three ActiveX controls and a collection of ActiveX objects: CWIMAQ, CWIMAQVision, and CWIMAQViewer. Refer to the ActiveX Objects section for information about the ActiveX objects.

CWIMAQ Control
Use this control to configure and perform an acquisition from the IMAQ device.
niocr.ocx
niocr.ocx provides one ActiveX control and a collection of ActiveX objects you use in a machine vision application to perform optical character recognition (OCR).

NIOCR Control
Use this control to perform OCR, which is the process by which the machine vision software reads text and/or characters in an image.
Tip: Refer to the source code of the CWMachineVision control for an example of how to use the CWIMAQVision methods.

ActiveX Objects
Use the objects to group related input parameters and output parameters to certain methods, thus reducing the number of parameters that you actually need to pass to those methods. ActiveX objects in cwimaq.ocx have a CWIMAQ prefix, objects in niocr.ocx have an NIOCR prefix, and objects in cwmv.ocx have a CWMV prefix.
[Figure 1-1 is a flowchart of the general steps for designing a vision application: Set Up Your Imaging System → Calibrate Your Imaging System (Chapter 6: Calibrating Images) → Create an Image → Acquire or Read an Image → Display an Image → Attach Calibration Information (Chapter 2: Getting Measurement-Ready Images) → Analyze an Image → Improve an Image → Make Measurements or Identify Objects in an Image using (1) Grayscale or Color Measurements, and/or (2) Particle Analysis, and/or (3) Machine Vision.]

[The flowchart continues with the inspection branches: for grayscale and color measurements (Chapter 3), Define Regions of Interest, then Measure Grayscale Statistics and Measure Color Statistics; for particle analysis (Chapter 4), Create a Binary Image, Improve a Binary Image, and Make Particle Measurements; for machine vision (Chapter 5), Locate Objects to Inspect, Set Search Areas, Find Measurement Points, Identify Parts Under Inspection (Classify Objects, Read Characters, Read Symbologies), and Convert Pixel Coordinates to Real-World Coordinates.]
Chapter 2: Getting Measurement-Ready Images

This chapter describes how to set up an imaging system, acquire and display an image, analyze the image, and prepare the image for additional processing.

Set Up Your Imaging System
Before you acquire, analyze, and process images, you must set up an imaging system. The manner in which you set up the system depends on the imaging environment and the type of analysis and processing you need to do.
color and monochrome devices as well as digital devices. Visit ni.com/imaq for more information about IMAQ devices.
4. Configure the driver software for the image acquisition device. If you have a National Instruments image acquisition device, configure the NI-IMAQ driver software through Measurement & Automation Explorer (MAX). Open MAX by double-clicking the Measurement & Automation Explorer icon on the desktop.
• Complex
• 32-bit RGB
• 32-bit HSL
• 64-bit RGB
When you create an image, it is an 8-bit image by default. You can set the Type property on the image object to change the image type. When you create an image, no memory is allocated to store the image pixels. IMAQ Vision methods automatically allocate the appropriate amount of memory when the image size is modified.
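As a minimal sketch of the behavior described above (the constant name cwimaqImageTypeRGB32 is an assumption based on the naming conventions used elsewhere in this manual):

```vb
' Create an image object; it is an 8-bit image by default.
Dim myImage As New CWIMAQImage

' Change the image type before acquiring or reading data into it.
' cwimaqImageTypeRGB32 is an assumed constant name for illustration.
myImage.Type = cwimaqImageTypeRGB32
```

No pixel memory is allocated at this point; memory is allocated automatically when a method modifies the image size.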
Acquire or Read an Image
After you create an image, you can acquire an image into the imaging system in one of the following three ways:
• Acquire an image with a camera through the image acquisition device.
• Load an image from a file stored on the computer.
• Convert the data stored in a 2D array to an image.
If you want to acquire multiple frames, set the image count to the number of frames you want to acquire. This operation is called a sequence. Use a sequence for applications that process multiple images. The following code illustrates an asynchronous sequence, where numberOfImages is the number of images that you want to process:

Private Sub Start_Click()
    CWIMAQ1.AcquisitionType = cwimaqAcquisitionOneShot
    CWIMAQ1.Images.RemoveAll
    CWIMAQ1.Images.Add numberOfImages
    CWIMAQ1.Start
End Sub
Private Sub Stop_Click()
    CWIMAQ1.Stop
End Sub

Reading a File
Use the CWIMAQVision.ReadImage method to open and read data from a file stored on the computer into the image reference. You can read from image files stored in several standard formats, such as BMP, TIFF, JPEG, PNG, and AIPD. In all cases, the software automatically converts the pixels it reads into the type of image you pass in. Use the CWIMAQVision.
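A minimal sketch of the file-reading call described above (the file path is hypothetical, and the parameter order shown is an assumption; ReadImage is the method named in the text):

```vb
' Read a BMP file from disk into the image displayed by the viewer.
' The software converts the pixel data it reads to the type of the
' image passed in. The path below is an example only.
CWIMAQVision1.ReadImage CWIMAQViewer1.Image, "C:\Images\part.bmp"
```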
the viewer. You can set the CWIMAQPalette.Type property to apply predefined color palettes. For example, if you need to display a binary image—an image that contains particle regions with pixel values of 1 and a background region with pixel values of 0—set the Type property to cwimaqPaletteBinary. For more information about color palettes, refer to Chapter 2, Display, of the IMAQ Vision Concepts Manual.
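The binary-palette setting described above might look like the following (assuming the viewer exposes its CWIMAQPalette object through a Palette property; cwimaqPaletteBinary is named in the text):

```vb
' Display a binary image with a palette that maps pixel values
' 0 and 1 to contrasting display colors.
CWIMAQViewer1.Palette.Type = cwimaqPaletteBinary
```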
Use CWIMAQVision.Histogram2 to analyze the overall grayscale distribution in the image. Use the histogram of the image to analyze two important criteria that define the quality of an image—saturation and contrast. If the image does not have enough light, the majority of the pixels will have low intensity values, which appear as a concentration of peaks on the left side of the histogram.
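A hedged sketch of gathering the histogram described above (Histogram2 is the method named in the text; the report object name and the parameter order are assumptions):

```vb
' Compute the grayscale distribution of the displayed image.
' CWIMAQHistogramReport and the argument order are assumptions
' made for illustration.
Dim report As New CWIMAQHistogramReport
CWIMAQVision1.Histogram2 CWIMAQViewer1.Image, 256, report

' Inspect the report to judge image quality: peaks pushed against
' either end of the range suggest saturation; a narrow distribution
' suggests poor contrast.
```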
Improve an Image
Using the information you gathered from analyzing the image, you may want to improve the quality of the image for inspection. You can improve the image with lookup tables, filters, grayscale morphology, and Fast Fourier transforms (FFT).

Lookup Tables
Apply lookup table (LUT) transformations to highlight image details in areas containing significant information at the expense of other areas.
Highpass filters emphasize details, such as edges, object boundaries, or cracks. These details represent sharp transitions in intensity value. You can define your own highpass filter with CWIMAQVision.Convolute or CWIMAQVision.NthOrder, or you can use a predefined highpass filter with CWIMAQVision.EdgeFilter or CWIMAQVision.CannyEdgeFilter. CWIMAQVision.
Use CWIMAQVision.GrayMorphology to perform one of the following seven transformations:
• Erosion—Reduces the brightness of pixels that are surrounded by neighbors with a lower intensity.
• Dilation—Increases the brightness of pixels surrounded by neighbors with a higher intensity. A dilation has the opposite effect of an erosion.
• Opening—Removes bright pixels isolated in dark regions and smooths boundaries.
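One of the transformations above might be invoked as follows (GrayMorphology is the method named in the text; the options object, constant names, and parameter order are all assumptions made for illustration):

```vb
' Apply a grayscale erosion: reduces the brightness of pixels
' surrounded by lower-intensity neighbors.
Dim srcImage As New CWIMAQImage
Dim dstImage As New CWIMAQImage

' CWIMAQMorphologyOptions and cwimaqMorphoErosion are assumed names.
Dim options As New CWIMAQMorphologyOptions
options.Method = cwimaqMorphoErosion

CWIMAQVision1.GrayMorphology srcImage, dstImage, options
```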
2. Improve the image in the frequency domain with a lowpass or highpass frequency filter.
3. Specify which type of filter to use with CWIMAQVision.CxAttenuate or CWIMAQVision.CxTruncate.
Lowpass filters smooth noise, details, textures, and sharp edges in an image. Highpass filters emphasize details, textures, and sharp edges in images, but they also emphasize noise.
Chapter 3: Making Grayscale and Color Measurements

This chapter describes how to take measurements from grayscale and color images. You can make inspection decisions based on image statistics, such as the mean intensity level in a region. Based on the image statistics, you can perform many machine vision inspection tasks on grayscale or color images, such as detecting the presence or absence of components, detecting flaws in parts, and comparing a color component with a reference.
Table 3-1. Tools Palette Functions

None: Disable the tools.
Selection Tool: Select an ROI in the image and adjust the position of its control points and contours. Action: Click the appropriate ROI or control points.
Point: Select a pixel in the image. Action: Click the appropriate position.
Line: Draw a line in the image. Action: Click the initial position and click again on the final position.
Polygon: Draw a polygon in the image. Action: Click to place a new vertex and double-click to complete the ROI element.
Freeline: Draw a freehand line in the image. Action: Click the initial position, drag to the appropriate shape, and release the mouse button to complete the shape.
Free Region: Draw a freehand region in the image.
[Figure 3-2. Tools Information. The figure labels the information displayed with the viewer: (1) anchoring coordinates of a region of interest, (2) size of the image, (3) zoom factor, (4) image type indicator (8-bit, 16-bit, Float, RGB32, RGBU64, HSL, Complex), (5) pixel intensity, (6) coordinates of the mouse, (7) size of an active region of interest, (8) length and horizontal angle of a line region.]
During design time, use the Menu property page to select which tools appear in the right-click menu. You also can designate a default tool from this property page. During run time, set the CWIMAQViewer.MenuItems property to select the tools to display, and set CWIMAQViewer.Tool to select the default tool.

Defining Regions Programmatically
You can define ROIs programmatically using the CWIMAQRegions collection.
CWIMAQRegion contains. When you know the type of shape that the region contains, you can set the region into a shape variable and use that variable to manipulate the shape properties. For example, the following code resizes a rectangle selected on the viewer:

Dim MyRectangle As CWIMAQRectangle
Set MyRectangle = CWIMAQViewer1.Regions(1)
MyRectangle.Width = 100
MyRectangle.Height = 100
minimum intensity, and maximum intensity. Use CWMachineVision.LightMeterRectangle to get the pixel value statistics within a rectangular region in an image. Use CWIMAQVision.Quantify to obtain the following statistics about the entire image or individual regions in the image: mean intensity, standard deviation, minimum intensity, maximum intensity, area, and the percentage of the image that you analyzed.
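A sketch of the rectangle light-meter measurement described above (LightMeterRectangle is the method named in the text; the rectangle property names other than Width, the report type, and the parameter order are assumptions):

```vb
' Measure pixel statistics inside a rectangular region of the
' displayed image.
Dim rect As New CWIMAQRectangle
rect.Left = 10      ' Left/Top/Height property names are assumptions;
rect.Top = 10       ' Width is used elsewhere in this manual.
rect.Width = 100
rect.Height = 50

' CWMVLightMeterReport is an assumed report type name.
Dim report As New CWMVLightMeterReport
CWMachineVision1.LightMeterRectangle CWIMAQViewer1.Image, rect, report
```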
[Figure 3-3 shows how a 32-bit color image is broken into color planes for 8-bit processing: the Red, Green, and Blue planes; the Hue, Saturation, and Intensity planes; the Hue, Saturation, and Luminance planes; or the Hue, Saturation, and Value planes, with each component stored in 8 bits.]
Comparing Colors
You can use the color matching capability of IMAQ Vision to compare or evaluate the color content of an image or regions in an image. Complete the following steps to compare colors using color matching:
1. Select an image containing the color information that you want to use as a reference. The color information can consist of a single color or multiple dissimilar colors, such as red and blue.
2.
Specifying the Color Information to Learn
Because color matching only uses color information to measure similarity, the image or regions in the image representing the object should contain only the significant colors that represent the object, as shown in Figure 3-5a. Figure 3-5b illustrates an unacceptable region containing background colors.

[Figure 3-5. (a) A region containing only the significant colors; (b) an unacceptable region containing background colors.]
Using a Region in the Image
You can select a region in the image to provide the color information for comparison. A region is helpful for pulling out the useful color information in an image. Figure 3-7 shows an example of using a region that contains the color information that is important for the application.

[Figure 3-7.]
fuses much better and results in high match scores—around 800—for both fuses. You can use an unlimited number of samples to learn the representative color spectrum for a specified template.

[Figure 3-8. Using Multiple Regions to Learn Color Distribution. (1) Regions used to learn color information.]

Choosing a Color Representation Sensitivity
When you learn a color, you need to specify the granularity required to represent the color information.
Ignoring Learned Colors
You can ignore certain color components in color matching by setting the corresponding component in the input color spectrum array to –1. To set a particular color component, follow these steps:
1. Copy CWIMAQColorInformation.ColorSpectrum, or create your own array.
2. Set the corresponding components of the array.
3. Assign this array to CWIMAQColorInformation.ColorSpectrum.
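The three steps above might be sketched as follows (the ColorSpectrum property is named in the text; the colorInfo variable, the Variant-array handling, and the component index are illustrative assumptions):

```vb
' Ignore one learned color component by writing -1 into the spectrum.
' colorInfo is a hypothetical CWIMAQColorInformation reference.
Dim spectrum As Variant
spectrum = colorInfo.ColorSpectrum   ' 1. Copy the spectrum array.
spectrum(3) = -1                     ' 2. Ignore component 3 (index chosen arbitrarily).
colorInfo.ColorSpectrum = spectrum   ' 3. Assign the array back.
```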
Chapter 4: Performing Particle Analysis

This chapter describes how to perform particle analysis on images. Use particle analysis to find statistical information about particles, such as the presence, size, number, and location of particle regions. With this information, you can perform many machine vision inspection tasks, such as detecting flaws on silicon wafers or detecting soldering defects on electronic boards.
If all the objects in the grayscale image are either brighter or darker than the background, you can use CWIMAQVision.AutoThreshold to automatically determine the optimal threshold range and threshold the image. Automatic thresholding techniques offer more flexibility than simple thresholds based on fixed ranges.
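A hedged sketch of the automatic thresholding described above (AutoThreshold is the method named in the text; the source/destination parameter order is an assumption):

```vb
' Automatically determine the optimal threshold range and apply it,
' producing a binary image of objects against the background.
Dim srcImage As New CWIMAQImage
Dim binaryImage As New CWIMAQImage

' Parameter order shown (source, destination) is an assumption.
CWIMAQVision1.AutoThreshold srcImage, binaryImage
```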
Removing Unwanted Particles
Use CWIMAQVision.RejectBorder to remove particles that touch the border of the image. Reject particles on the border of the image when you suspect that the information about those particles is incomplete. Use CWIMAQVision.RemoveParticle to remove large or small particles that do not interest you. You also can use the Erode, Open, and POpen methods in CWIMAQVision.Morphology to remove small particles. Unlike CWIMAQVision.
Improving Particle Shapes
Use CWIMAQVision.FillHole to fill holes in the particles. Use CWIMAQVision.Morphology to perform a variety of operations on the particles. You can use the Open, Close, Proper Open, Proper Close, and auto-median operations to smooth the boundaries of the particles. Open and Proper Open smooth the boundaries of the particle by removing small isthmuses, while Close widens the isthmuses. Close and Proper Close fill small holes in the particle.
Table 4-1. Measurement Types (Continued)

cwimaqMeasurementAverageHorizSegmentLength: Average length of a horizontal segment in the particle.
cwimaqMeasurementAverageVertSegmentLength: Average length of a vertical segment in the particle.
cwimaqMeasurementBoundingRectBottom: Y-coordinate of the lowest particle point.
cwimaqMeasurementBoundingRectDiagonal: Distance between opposite corners of the bounding rectangle.
cwimaqMeasurementConvexHullPerimeter: Perimeter of the smallest convex polygon containing all points in the particle.
cwimaqMeasurementElongationFactor: Max Feret Diameter divided by Equivalent Rect Short Side (Feret).
cwimaqMeasurementEquivalentEllipseMajorAxis: Length of the major axis of the ellipse with the same perimeter and area as the particle.
cwimaqMeasurementHolesArea: Sum of the areas of each hole in the particle.
cwimaqMeasurementHolesPerimeter: Sum of the perimeters of each hole in the particle.
cwimaqMeasurementHuMoment1: The first Hu moment.
cwimaqMeasurementHuMoment2: The second Hu moment.
cwimaqMeasurementHuMoment3: The third Hu moment.
cwimaqMeasurementHuMoment4: The fourth Hu moment.
cwimaqMeasurementMaxFeretDiameterStartY: Y-coordinate of the start of the line segment connecting the two perimeter points that are the furthest apart.
cwimaqMeasurementMaxHorizSegmentLengthLeft: X-coordinate of the leftmost pixel in the longest row of contiguous pixels in the particle.
cwimaqMeasurementNormMomentOfInertiaXY: The normalized moment of inertia in the X and Y directions.
cwimaqMeasurementNormMomentOfInertiaXYY: The normalized moment of inertia in the X direction once and the Y direction twice.
cwimaqMeasurementNormMomentOfInertiaYY: The normalized moment of inertia in the Y direction twice.
cwimaqMeasurementSumXXY: The sum of all X-coordinates squared times Y-coordinates in the particle.
cwimaqMeasurementSumXY: The sum of all X-coordinates times Y-coordinates in the particle.
cwimaqMeasurementSumXYY: The sum of all X-coordinates times Y-coordinates squared in the particle.
cwimaqMeasurementSumY: The sum of all Y-coordinates in the particle.
Chapter 5: Performing Machine Vision Tasks

This chapter describes how to perform many common machine vision inspection tasks. The most common inspection tasks are detecting the presence or absence of parts in an image and measuring the dimensions of parts to see if they meet specifications. Measurements are based on characteristic features of the object represented in the image.
[Figure 5-1. Steps to Performing Machine Vision: Locate Objects to Inspect → Set Search Areas → Find Measurement Points → Identify Parts Under Inspection (Classify Objects, Read Characters, Read Symbologies) → Convert Pixel Coordinates to Real-World Coordinates → Make Measurements → Display Results.]

Note: Diagram items enclosed with dashed lines are optional steps.
to as the measurement coordinate system. The measurement methods automatically move the ROIs to the correct position using the position of the measurement coordinate system with respect to the reference coordinate system. For information about coordinate systems, refer to Chapter 13, Dimensional Measurements, of the IMAQ Vision Concepts Manual. You can build a coordinate transformation using edge detection or pattern matching.
[Figure 5-2. Coordinate Systems of a Reference Image and Inspection Image. Legend: (1) search area for the coordinate system, (2) object edges, (3) origin of the coordinate system, (4) measurement area.]

b. If you use CWMachineVision.FindCoordTransformUsingTwoRects, specify two rectangular ROIs, each containing one separate, straight boundary of the object, as shown in Figure 5-3. The boundaries cannot be parallel.
[Figure 5-3. Locating Coordinate System Axes with Two Search Areas. Legend: (1) primary search area, (2) secondary search area, (3) origin of the coordinate system, (4) measurement area.]

2. Choose the parameters you need to locate the edges on the object.
3. Choose the coordinate system axis direction.
4. Choose the results that you want to overlay onto the image.
5. Choose the mode for the method.
1. Define a template that represents the part of the object that you want to use as a reference feature. For more information about defining a template, refer to the Find Measurement Points section.
2. Define a rectangular search area in which you expect to find the template.
3. Set the MatchMode property of the CWMVFindCTUsingPatternOptions object to cwimaqRotationInvariant when you expect the template to appear rotated in the inspection images.
Choosing a Method to Build the Coordinate Transformation
Figure 5-4 guides you through choosing the best method for building a coordinate transformation for the application.

[Figure 5-4 is a decision flowchart. Starting from Start, it asks in order: Is the object positioning accuracy better than ±65 degrees? Does the object under inspection have a straight, distinct edge (main axis)? Does the object contain a second distinct edge, not parallel to the main axis, in the same search area?]
Set Search Areas
Select ROIs in the images to limit the areas in which you perform the processing and inspection. You can define ROIs interactively or programmatically.

Defining Regions Interactively
Follow these steps to interactively define an ROI:
1. Call CWMachineVision.SetupViewerForSelection. The following values are available: Annulus, Line, Point, Rectangle, and RotatedRect.
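The interactive sequence above might look like the following, using the paired setup/get methods listed in Table 5-1 (SetupViewerForSelection is named in the text; the shape constant name, the GetSelectedRectangleFromViewer method, and the argument order follow the naming in Table 5-1 but are otherwise assumptions):

```vb
' 1. Prepare the viewer so the operator can draw a rectangle.
'    The constant name cwimaqRegionRectangle is an assumption.
CWMachineVision1.SetupViewerForSelection CWIMAQViewer1, cwimaqRegionRectangle

' 2. Later, retrieve the shape the operator drew.
'    GetSelectedRectangleFromViewer follows the GetSelected...FromViewer
'    naming pattern shown in Table 5-1.
Dim rect As CWIMAQRectangle
Set rect = CWMachineVision1.GetSelectedRectangleFromViewer(CWIMAQViewer1)
```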
Table 5-1. ROI Selection Methods to Use with CWMachineVision Methods (Continued)

SetupViewerForPointSelection / GetSelectedPointFromViewer: use with LightMeterPoint.
SetupViewerForLineSelection / GetSelectedLineFromViewer: use with LightMeterLine.

Defining Regions Programmatically
When you have an automated application, you need to define regions of interest programmatically.
Finding Lines or Circles
If you want to find points along the edge of an object and find a line describing the edge, use CWMachineVision.FindStraightEdge and CWMachineVision.FindConcentricEdge. CWMachineVision.FindStraightEdge finds edges based on rectangular search areas, as shown in Figure 5-5. CWMachineVision.FindConcentricEdge finds edges based on annular search areas.
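A sketch of a straight-edge search (FindStraightEdge is the method named above; the rectangle setup, the report type name, and the parameter order are assumptions):

```vb
' Find a straight edge inside a rectangular search area.
Dim searchRect As New CWIMAQRectangle
searchRect.Width = 200        ' remaining geometry omitted for brevity;
                              ' property names are assumptions.

' CWMVStraightEdgeReport is an assumed report type name.
Dim report As New CWMVStraightEdgeReport
CWMachineVision1.FindStraightEdge CWIMAQViewer1.Image, searchRect, report
```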
[Figure 5-6. Finding a Circular Feature. Legend: (1) annular search region, (2) search lines, (3) detected edge points, (4) circle fit to edge points.]

These methods locate the intersection points between a set of search lines in the search region and the edge of an object. Specify the separation between the lines that the methods use to detect edges. The methods determine the intersection points based on their contrast, width, and steepness.
Finding Edge Points Along Multiple Search Contours
Use the CWIMAQVision.Rake, CWIMAQVision.Spoke, and CWIMAQVision.ConcentricRake methods to find edge points along multiple search contours. These methods behave like CWIMAQVision.FindEdges2, but they find edges on multiple contours. These methods find only the first edge that meets the criteria along each contour. Pass in a CWIMAQRegions object to define the search region for these methods. CWIMAQVision.
3. Define an image or an area of an image as the search area. A small search area reduces the time to find the features.
4. Set the tolerances and parameters to specify how the algorithm operates at run time using CWIMAQMatchPatternOptions.
5. Test the search algorithm on test images using CWIMAQVision.MatchPattern2.
6. Verify the results using a ranking method.
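Steps 4 and 5 above might be sketched as follows (MatchPattern2 and CWIMAQMatchPatternOptions are named in the text; the template image variable, the option property name, the report type, and the argument order are assumptions):

```vb
' Search for a previously trained template in the inspection image.
Dim templateImage As New CWIMAQImage   ' hypothetical trained template

Dim options As New CWIMAQMatchPatternOptions
options.MinimumMatchScore = 800        ' property name is an assumption

' CWIMAQPatternMatchReport is an assumed collection of
' CWIMAQPatternMatchReportItem results (the item type is named
' later in this chapter).
Dim matches As New CWIMAQPatternMatchReport
CWIMAQVision1.MatchPattern2 CWIMAQViewer1.Image, templateImage, _
                            options, matches
```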
Feature Detail
A template with relatively coarse features is less sensitive to variations in size and rotation than a model with fine features. However, the model must contain enough detail to identify it.

[Figure 5-8. Feature Detail: (a) good feature detail; (b) ambiguous feature detail.]

Positional Information
A template with strong edges in both the x and y directions is easier to locate.
Background Information
Unique background information in a template improves search performance and accuracy.

[Figure 5-10. Background Information: (a) pattern with insufficient background information; (b) pattern with sufficient background information.]

Training the Pattern Matching Algorithm
After you create a good template image, the pattern matching algorithm has to learn the important features of the template. Use CWIMAQVision.
Defining a Search Area
Two equally important factors define the success of a pattern matching algorithm: accuracy and speed. You can define a search area to reduce ambiguity in the search process. For example, if the image has multiple instances of a pattern and only one of them is required for the inspection task, the presence of additional instances of the pattern can produce incorrect results.
[Figure 5-11. Selecting a Search Area for Grayscale Pattern Matching (panels a through d).]

Setting Matching Parameters and Tolerances
Every pattern matching algorithm makes assumptions about the images and pattern matching parameters used in machine vision applications. These assumptions work for a high percentage of the applications. However, there may be applications in which the assumptions used in the algorithm are not optimal.
Minimum Contrast
Contrast is the difference between the smallest and largest pixel values in a region. You can set the minimum contrast to potentially increase the speed of the pattern matching algorithm. The pattern matching algorithm ignores all image regions where contrast values fall beneath a set minimum contrast value. If the search image has high contrast but contains some low contrast regions, you can set a high minimum contrast value.
Using a Ranking Method to Verify Results
The manner in which you interpret the pattern matching results depends on the application. For typical alignment applications, such as finding a fiducial on a wafer, the most important information is the position and bounding rectangle of the best match. Use CWIMAQPatternMatchReportItem.Position and CWIMAQPatternMatchReportItem.BoundingPoints to get the position and location of a match.
5. Set the tolerances and parameters to specify how the algorithm operates at run time using CWIMAQMatchColorPatternOptions.
6. Test the search algorithm on test images using CWIMAQVision.MatchColorPattern.
7. Verify the results using a ranking method.

Defining and Creating Effective Color Template Images
The selection of an effective template image plays a critical part in obtaining accurate results with the color pattern matching algorithm.
Background Information
Unique background information in a template improves search performance and accuracy during the grayscale pattern matching phase. This requirement could conflict with the “color information” requirement because background colors may not be appropriate during the color location phase.
Defining a Search Area

Two equally important factors define the success of a color pattern matching algorithm: accuracy and speed. You can define a search area to reduce ambiguity in the search process. For example, if the image has multiple instances of a pattern and only one instance is required for the inspection task, the presence of additional instances of the pattern can produce incorrect results.
The time required to locate a pattern in an image depends on both the template size and the search area. By reducing the search area or increasing the template size, you can reduce the required search time. Increasing the size of the template can improve search time, but doing so reduces match accuracy if the larger template includes an excess of background information.
Use one of the following four search strategies:
• Very aggressive—Uses the largest step size, the most sub-sampling, and only the dominant color from the template to search for the template. Use this strategy when the color in the template is almost uniform, the template is well contrasted from the background, and there is a good amount of separation between different occurrences of the template in the image.
Minimum Contrast

Use the minimum contrast to increase the speed of the color pattern matching algorithm. The color pattern matching algorithm ignores all image regions where grayscale contrast values fall beneath a set minimum contrast value. Use CWIMAQMatchColorPatternMatchingOptions.MinimumContrast to set the minimum contrast. Refer to the Setting Matching Parameters and Tolerances section of this chapter for more information about minimum contrast.
• Does not always require the location with sub-pixel accuracy
• Does not require shape information for the region

Complete the following steps to find features in an image using color location:
1. Define a reference pattern in the form of a template image.
2. Use the reference pattern to train the color location algorithm with CWIMAQVision.LearnColorPattern.
3. Define an image or an area of an image as the search area.
the rake method, and then they compute the distance between the points detected on the edges along each search line of the rake and return the largest or smallest distance in either the horizontal or vertical direction. The MeasurementAxis parameter specifies the axis along which to measure. You also need to specify the parameters for edge detection and the separation between the search lines that you want to use within the search region to find the edges.
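The distance computation described above can be sketched in a language-neutral way. In this Python illustration (hypothetical names, not the IMAQ API), each search line of the rake yields a pair of edge coordinates along the measurement axis, and the largest or smallest separation is returned:

```python
def clamp_max_distance(edge_pairs):
    """Given the (first_edge, last_edge) coordinates detected along
    each search line of the rake, return the largest separation.
    """
    return max(last - first for first, last in edge_pairs)

def clamp_min_distance(edge_pairs):
    """Return the smallest separation across the search lines."""
    return min(last - first for first, last in edge_pairs)

# Edge pairs found along three search lines (pixel coordinates
# along the measurement axis).
pairs = [(12, 88), (10, 91), (14, 86)]
print(clamp_max_distance(pairs))  # 81
print(clamp_min_distance(pairs))  # 72
```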
• FindMidLine—Finds the line that is midway between a point and a line and is parallel to the line.
• FindPolygonArea—Calculates the area of a polygon specified by its vertex points.

Instrument Reader Measurements

You can make measurements based on the values obtained by meter, LCD, and barcode readers. Use CWIMAQMeterArc.CreateFromPoints or CWIMAQMeterArc.CreateFromLines to calibrate a meter or gauge that you want to read. CWIMAQMeterArc.
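FindPolygonArea computes the area of a polygon from its vertex points. The standard way to do this is the shoelace formula, sketched below in Python as an illustration of the computation, not of the IMAQ implementation:

```python
def polygon_area(vertices):
    """Area of a simple polygon given its vertex points in order,
    via the shoelace formula. Vertices are (x, y) tuples.
    """
    n = len(vertices)
    twice_area = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]  # wrap back to the first vertex
        twice_area += x0 * y1 - x1 * y0
    return abs(twice_area) / 2.0

# A 4 x 3 rectangle has area 12.
print(polygon_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 12.0
```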
Before you classify objects, you must create a classifier file with samples of the objects using the NI Classification Training Interface. Go to Start»Programs»National Instruments»Classification Training to launch the NI Classification Training Interface. After you have trained samples of the objects you want to classify, use the following methods to classify the image under inspection:
• Use CWIMAQVision.
types: Codabar, Code 39, Code 93, Code 128, EAN 8, EAN 13, Interleaved 2 of 5, MSI, and UPCA.

Read Data Matrix Barcode

Use CWIMAQVision.ReadDataMatrixBarcode to read values encoded in a Data Matrix barcode. This method can automatically determine the location of the barcode and appropriate search options for the application. However, you can improve the performance of the application by specifying control values specific to the application. CWIMAQVision.
By default, CWIMAQVision.ReadDataMatrixBarcode automatically detects the type of barcode to read. You can improve the performance of the function by specifying the type of barcode in the application. IMAQ Vision supports Data Matrix types ECC 000 to ECC 140, and ECC 200.

Read PDF417 Barcode

Use CWIMAQVision.ReadPDF417Barcode to read values encoded in a PDF417 barcode. By default, CWIMAQVision.
• DrawRectangle—Overlays a CWIMAQRectangle object on an image.
• DrawOval—Overlays a CWIMAQOval object on an image.
• DrawArc—Overlays a CWIMAQArc object on an image.
• DrawPicture—Overlays a picture object onto the image.
• DrawText—Overlays text on an image.
• DrawRegions—Overlays an ROI described by the CWIMAQRegions object on an image.

You can select the color of overlays by using one of these methods.
to True. With CWMachineVision.FindPattern, you can overlay the search area and the result. Use CWIMAQOverlay.Clear to clear any previous overlay information from the image. Use CWIMAQVision.WriteImageAndVisionInfo to save an image with its overlay information to a file. You can read the information from the file into an image using CWIMAQVision.ReadImageAndVisionInfo.
Chapter 6 Calibrating Images

This chapter describes how to calibrate the imaging system, save calibration information, and attach calibration information to an image. After you set up the imaging system, you may want to calibrate the system. If the imaging setup is such that the camera axis is perpendicular or nearly perpendicular to the object under inspection and the lens has no distortion, use simple calibration. With simple calibration, you do not need to learn a template.
Refer to Chapter 5, Performing Machine Vision Tasks, for more information about applying calibration information before making measurements.

Defining a Calibration Template

You can define a calibration template by supplying an image of a grid or providing a list of pixel coordinates and their corresponding real-world coordinates. This section discusses the grid method in detail. A calibration template is a user-defined grid of circular dots.
Defining a Reference Coordinate System

To express measurements in real-world units, you must define a coordinate system in the image of the grid. Use CWIMAQLearnCalibrationOptions.CalibrationAxisInfo to define a coordinate system by its origin, angle, and axis direction. The origin, expressed in pixels, defines the center of the coordinate system.
1 Origin of a Calibration Grid in the Real World
2 Origin of the Same Calibration Grid in an Image

Figure 6-3. A Calibration Grid and an Image of the Grid

Note If you specify a list of points instead of a grid for the calibration process, the software defines a default coordinate system, as follows:
1. The origin is placed at the point in the list with the lowest x-coordinate value and then the lowest y-coordinate value.
2.
1 Default Origin in a Calibration Grid Image
2 User-Defined Origin

Figure 6-4. Defining a Coordinate System

Learning Calibration Information

After you define a calibration grid and reference axis, acquire an image of the grid using the current imaging setup. For information about acquiring images, refer to the Acquire or Read an Image section of Chapter 2, Getting Measurement-Ready Images. The grid does not need to occupy the entire image.
Specifying Scaling Factors

Scaling factors are the real-world distances between the dots in the calibration grid in the x and y directions and the units in which the distances are measured. Use CWIMAQCalibrationGridOptions.GridDescriptor to specify the scaling factors.

Choosing a Region of Interest

Define a learning ROI during the learning process to specify the region of the calibration grid you want to learn.
Choose the perspective projection algorithm when the system exhibits perspective errors only. A perspective projection calibration has an accurate transformation even in areas not covered by the calibration grid, as shown in Figure 6-6. Set CWIMAQLearnCalibrationOptions.CalibrationMethod to cwimaqPerspectiveCalibration to choose the perspective calibration algorithm. Learning and applying perspective projection is less computationally intensive than the nonlinear method.
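A perspective projection can be modeled as a 3 x 3 homography that maps pixel coordinates to real-world coordinates. The Python sketch below illustrates the underlying math only; it is not the IMAQ implementation, and the matrix used here is a placeholder:

```python
def apply_homography(h, x, y):
    """Map a pixel coordinate to real-world coordinates with a
    3 x 3 perspective (homography) matrix h, given as row-major
    nested lists. The result is divided by the projective term w.
    """
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    wx = h[0][0] * x + h[0][1] * y + h[0][2]
    wy = h[1][0] * x + h[1][1] * y + h[1][2]
    return wx / w, wy / w

# The identity homography leaves coordinates unchanged.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(apply_homography(identity, 120, 80))  # (120.0, 80.0)
```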
Note A high score does not reflect the accuracy of the system. If the learning process returns a learning score below 600, try the following:
1. Make sure the grid complies with the guidelines listed in the Defining a Calibration Template section.
2. Check the lighting conditions. If you have too much or too little lighting, the software may estimate the center of the dots incorrectly. Also, adjust the threshold range to distinguish the dots from the background.
3.
Calibration Invalidation

Any image processing operation that changes the image size or orientation voids the calibration information in a calibrated image. Examples of methods that void calibration information include CWIMAQVision.Resample2, CWIMAQVision.Extract2, CWIMAQVision.Unwrap, and CWIMAQImage.ArrayToImage.

Simple Calibration

When the axis of the camera is perpendicular to the image plane and lens distortion is negligible, use simple calibration.
1 Origin

Figure 6-7. Defining a Simple Calibration

Save Calibration Information

After you learn the calibration information, you can save it so that you do not have to relearn the information for subsequent processing. Use CWIMAQVision.WriteImageAndVisionInfo to save the image of the grid and its associated calibration information to a file. To read the file containing the calibration information, use CWIMAQVision.ReadImageAndVisionInfo.
CWIMAQVision.ConvertPixelToRealWorldCoordinates. If the application requires shape measurements, correct the image by removing distortion with CWIMAQVision.CorrectCalibratedImage.

Note Correcting images is a time-intensive operation. A calibrated image is different from a corrected image.

Note Because calibration information is part of the image, it is propagated throughout the processing and analysis of the image.
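Under simple calibration, converting a pixel coordinate to real-world units reduces to an origin shift and a per-axis scale. This Python sketch shows the arithmetic only; the names are hypothetical, and the manual's own routine for this purpose is CWIMAQVision.ConvertPixelToRealWorldCoordinates:

```python
def pixel_to_real_world(px, py, origin=(0.0, 0.0), dx=1.0, dy=1.0):
    """Convert a pixel coordinate to real-world units under simple
    calibration: dx and dy are the real-world sizes of one pixel in
    the x and y directions, and origin is the user-defined origin
    in pixel coordinates.
    """
    ox, oy = origin
    return (px - ox) * dx, (py - oy) * dy

# With 0.05 mm pixels and the origin at pixel (100, 100), pixel
# (180, 40) lies 4 mm right of the origin and 3 mm above it
# (negative y, because image y grows downward).
print(pixel_to_real_world(180, 40, origin=(100, 100), dx=0.05, dy=0.05))
```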
Appendix A Technical Support and Professional Services

Visit the following sections of the National Instruments Web site at ni.com for technical support and professional services:
• Support—Online technical support resources at ni.
Glossary

Numbers

1D
One-dimensional.

2D
Two-dimensional.

3D
Three-dimensional.

A

AIPD
The National Instruments internal image file format used for saving complex images and calibration information associated with an image (extension APD).

alignment
The process by which a machine vision application determines the location, orientation, and scale of a part being inspected.

alpha channel
The channel used to code extra information, such as gamma correction, about a color image.
barycenter
The grayscale value representing the centroid of the range of an image's grayscale values in the image histogram.

binary image
An image in which the objects usually have a pixel intensity of 1 (or 255) and the background has a pixel intensity of 0.

binary morphology
Functions that perform morphological operations on a binary image.
C

caliper
(1) A function in the NI Vision Assistant and in NI Vision Builder for Automated Inspection that calculates distances, angles, circular fits, and the center of mass based on positions given by edge detection, particle analysis, centroid, and search functions. (2) A measurement function that finds edge pairs along a specified path in the image.
connectivity-4
Only pixels adjacent in the horizontal and vertical directions are considered neighbors.

connectivity-8
All adjacent pixels are considered neighbors.

contrast
A constant multiplication factor applied to the luma and chroma components of a color pixel in the color decoding process.

convex hull
The smallest convex polygon that can encapsulate a particle.

convex hull function
Computes the convex hull of objects in a binary image.

convolution
See linear filter.
edge steepness
The number of pixels that corresponds to the slope or transition area of an edge.

energy center
The center of mass of a grayscale image. See center of mass.

equalize function
See histogram equalization.

erosion
Reduces the size of an object along its boundary and eliminates isolated points in the image.

exponential and gamma corrections
Expand the high gray-level information in an image while suppressing low gray-level information.
gradient filter
An edge detection algorithm that extracts the contours in gray-level values. Gradient filters include the Prewitt and Sobel filters.

gray level
The brightness of a pixel in an image.

gray-level dilation
Increases the brightness of pixels in an image that are surrounded by other pixels with a higher intensity.

gray-level erosion
Reduces the brightness of pixels in an image that are surrounded by other pixels with a lower intensity.
hit-miss function
Locates objects in the image similar to the pattern defined in the structuring element.

HSI
A color encoding scheme in hue, saturation, and intensity.

HSL
A color encoding scheme using hue, saturation, and luminance information where each pixel in the image is encoded using 32 bits: 8 bits for hue, 8 bits for saturation, 8 bits for luminance, and 8 unused bits.

HSV
A color encoding scheme in hue, saturation, and value.

hue
Represents the dominant color of a pixel.
image enhancement
The process of improving the quality of an image that you acquire from a sensor in terms of signal-to-noise ratio, image contrast, edge definition, and so on.

image file
A file containing pixel data and additional information about the image.

image format
Defines how an image is stored in a file. Usually composed of a header followed by the pixel data.

image mask
A binary image that isolates parts of a source image for further processing.
intensity calibration
Assigns user-defined quantities such as optical densities or concentrations to the gray-level values in an image.

intensity profile
The gray-level distribution of the pixels along an ROI in an image.

intensity range
Defines the range of gray-level values in an object of an image.

intensity threshold
Characterizes an object based on the range of gray-level values in the object.
linear filter
A special algorithm that calculates the value of a pixel based on its own pixel value as well as the pixel values of its neighbors. The sum of this calculation is divided by the sum of the elements in the matrix to obtain a new pixel value.

logarithmic function
Increases the brightness and contrast in dark regions of an image and decreases the contrast in bright regions of the image.
M

M
(1) Mega, the standard metric prefix for 1 million or 10^6, when used with units of measure such as volts and hertz. (2) Mega, the prefix for 1,048,576, or 2^20, when used with B to quantify data or computer memory.

machine vision
An automated application that performs a set of visual inspection tasks.

mask FFT filter
Removes frequencies contained in a mask (range) specified by the user.
NI-IMAQ
The driver software for National Instruments IMAQ hardware.

nonlinear filter
Replaces each pixel value with a nonlinear function of its surrounding pixels.

nonlinear gradient filter
A highpass edge-extraction filter that favors vertical edges.

nonlinear Prewitt filter
A highpass, edge-extraction filter based on two-dimensional gradient information.

nonlinear Sobel filter
A highpass, edge-extraction filter based on two-dimensional gradient information.
optical representation
Contains the low-frequency information at the center and the high-frequency information at the corners of an FFT-transformed image.

outer gradient
Finds the outer boundary of objects.

P

palette
The gradation of colors used to display an image on screen, usually defined by a CLUT.

particle
A connected region or grouping of non-zero pixels in a binary image.
proper-closing
A finite combination of successive closing and opening operations that you can use to fill small holes and smooth the boundaries of objects.

proper-opening
A finite combination of successive opening and closing operations that you can use to remove small particles and smooth the boundaries of objects.

Q

quantitative analysis
Obtaining various measurements of objects in an image.
ROI
Region of interest. (1) An area of the image that is graphically selected from a window displaying the image. This area can be used to focus further processing. (2) A hardware-programmable rectangular portion of the acquisition window.

ROI tools
A collection of tools that enable you to select a region of interest from an image. These tools let you select points, lines, annuli, polygons, rectangles, rotated rectangles, ovals, and freehand open and closed contours.
spatial filters
Alter the intensity of a pixel relative to variations in intensities of its neighboring pixels. You can use these filters for edge detection, image enhancement, noise reduction, smoothing, and so forth.

spatial resolution
The number of pixels in an image, in terms of the number of rows and columns in the image.

square function
See exponential function.

square root function
See logarithmic function.
V

value
The grayscale intensity of a color pixel computed as the average of the maximum and minimum red, green, and blue values of that pixel.
Index Numerics building coordinate transformation with edge detection, 5-3 coordinate transformation with pattern matching, 5-5 building coordinate transformations, 5-7 choosing a method, 5-7 1D barcodes, 5-29 reading, 5-29 A acquiring images, 2-4 continuous acquisition, 2-5 one-shot acquisition, 2-4 Acquisition Type combo box, 2-4 ActiveX objects, 1-5 adding shapes to ROIs, 3-5 analyzing images, 2-7, 2-8 Annulus tool, 3-2 Application, 1-6 application development general steps, 1-6 inspection steps, 1-7
creating binary images, 4-1, 4-2 images, 2-2 IMAQ Vision applications, 1-5 template images, 5-13 CWIMAQ control, 1-3 cwimaq.ocx, 1-3 CWIMAQViewer control, 1-3 CWIMAQVision, 1-3 CWMachineVision control, 1-4 CWMachineVision methods, 5-8 cwmv.
Freeline tool, 3-3 documentation conventions used in manual, ix NI resources, A-1 related documentation, x drivers NI resources, A-1 NI-IMAQ, xi G geometrical measurements, 5-27 granularity specifying requirements for learning a color, 3-12 using color sensitivity to control, 5-23 grayscale features, filtering, 2-10 grayscale morphology, filtering grayscale features, 2-10 grayscale statistics, measuring, 3-6 E edge detection, 5-3 finding features, 5-9 edge points, finding along multiple search con
light intensity, measuring, 3-6 lighting effects on image colors, 3-11 Line tool, 3-2 lines, finding, 5-10 locating objects to detect, 5-2 lowpass attenuation, 2-12 filter, 2-9 LUTs, 2-9 highlighting details in images, 2-9 imaging systems, setting up, 2-1 IMAQ Vision applications, creating, 1-5 improving binary images, 4-2 images, 2-9 particle shapes, 4-4 increasing speed of the color pattern matching algorithm, 5-25 speed of the pattern matching algorithm, 5-18 instrument, A-1 instrument drivers, x
O points finding along one search contour, 5-11 finding along the edge of a circle, 5-10 finding measurement points, 5-9 finding with color location, 5-25 finding with color pattern matching, 5-19 finding with pattern matching, 5-12 Polygon tool, 3-3 programmatically defining regions, 5-9 regions of interest, 3-5 programming examples (NI resources), A-1 objects classifying, 5-29 detecting, 5-2 locating, 5-2 OCR, 5-29 one-shot acquisition, 2-4 optimizing speed of the color pattern matching algorithm
specifying color information, 3-10 granularity to learn a color, 3-12 learning algorithm, 6-6 region of interest, 6-6 scaling factors, 6-6 specifying scaling factors, learning calibration information, 6-6 speed increasing for color pattern matching algorithms, 5-25 increasing for pattern matching algorithms, 5-18 support, technical, A-1 symmetric templates, 5-13 ROIs adding shapes, 3-5 programmatically defining, 3-5 Rotated Rectangle tool, 3-2 rotation angle ranges setting for color pattern matching
testing search algorithms, 5-18, 5-25 tolerances, setting for pattern matching, 5-17 touching particles, separating, 4-3 training characters, 5-29 color pattern matching algorithms, 5-21 pattern matching algorithm, 5-15 training and certification (NI resources), A-1 troubleshooting (NI resources), A-1 V U Z using learning scores, 6-7 ranking to verify pattern matching results, 5-19 Zoom tool, 3-3 viewing color differences in an image using multiple regions, 3