Basler runner USER’S MANUAL FOR GigE VISION CAMERAS Document Number: AW000493 Version: 13 Language: 000 (English) Release Date: 07 February 2013 PRELIMINARY
For customers in the U.S.A. This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications.
Contacting Basler Support Worldwide

Europe: Basler AG, An der Strusbek 60 - 62, 22926 Ahrensburg, Germany, Tel.: +49-4102-463-515, Fax: +49-4102-463-599, support.europe@baslerweb.com

Americas: Basler, Inc., 855 Springdale Drive, Suite 203, Exton, PA 19341, U.S.A., Tel.: +1-610-280-0171, Fax: +1-610-280-7608, support.usa@baslerweb.com

Asia: Basler Asia Pte. Ltd., 8 Boon Lay Way, #03-03 Tradehub 21, Singapore 609964, Tel.: +65-6425-0472, Fax: +65-6425-0473, support.asia@baslerweb.com

www.baslerweb.com
Table of Contents

1 Specifications, Requirements, and Precautions ..... 1
  1.1 Models ..... 1
  1.2 General Specifications ..... 2
  1.3 Spectral Response

7.2 Connector Pin Assignments and Numbering
  7.2.1 Pin Assignments for the 12-Pin Receptacle
  7.2.2 Pin Assignments for the 6-Pin Receptacle
  7.2.3 Pin Assignments for the RJ-45 Jack
  7.2.4 Pin Numbering

8.2.4 Line Start Triggering
  8.2.4.1 Line Start Trigger Mode = Off
  8.2.4.2 Line Start Trigger Mode = On
  8.2.4.3 Setting the Line Start Trigger Parameters
8.2.5 Exposure Time
8.2.6 Use Case Descriptions and Diagrams

11 Standard Features ..... 169
  11.1 Gain and Black Level on Monochrome Cameras ..... 169
    11.1.1 Gain ..... 169
    11.1.2 Black Level

Revision History ..... 231
Index
1 Specifications, Requirements, and Precautions

This chapter lists the camera models covered by the manual. It provides the general specifications for those models and the basic requirements for using them. This chapter also includes specific precautions that you should keep in mind when using the cameras. We strongly recommend that you read and follow the precautions.
1.2 General Specifications

Models: ruL1024-19gm, ruL1024-36gm, ruL1024-57gm
Sensor Size: 1024 pixels
Sensor Type: Thompson TH7813A Linear CCD
Pixel Size: 10.0 µm x 10.0 µm
Fill Factor: 100%
Max Line Rate: 18.7 kHz (ruL1024-19gm), 35.7 kHz (ruL1024-36gm), 56.1 kHz (ruL1024-57gm)
Min Line Rate: No minimum when an external line trigger signal is used
Models: ruL2048-10gm, ruL2048-19gm, ruL2048-30gm
Sensor Size: 2048 pixels
Sensor Type: Thompson TH7814A Linear CCD
Pixel Size: 10.0 µm x 10.0 µm
Fill Factor: 100%
Max Line Rate: 9.7 kHz (ruL2048-10gm), 18.7 kHz (ruL2048-19gm), 29.2 kHz (ruL2048-30gm)
Min Line Rate: No minimum when an external line trigger signal is used
Model: ruL2098-10gc
Sensor Size: 2098 pixels x 3 lines
Sensor Type: Kodak KLI-2113 Tri-linear CCD
Pixel Size: 14.0 µm x 14.0 µm
Center-to-center Spacing Between Lines: 112 µm
Fill Factor: 100%
Max Line Rate: 9.2 kHz
1.3 Spectral Response

1.3.1 Monochrome Cameras

The following graph shows the spectral response for monochrome cameras.

[Figure: Spectral response for monochrome cameras. Responsivity (V/µJ/cm²) plotted against wavelength (300 nm to 1000 nm), with reference curves for QE = 80%, 70%, and 60%.]
1.3.2 Color Cameras

The following graph shows the spectral response for color cameras.

[Figure: Spectral response for color cameras. Responsivity (V/µJ/cm²) plotted against wavelength (350 nm to 850 nm), with separate curves for the red, green, and blue lines.]
AW00049313000 Specifications, Requirements, and Precautions 1.4 Mechanical Specifications 1.4.1 Camera Dimensions and Mounting Points The cameras are manufactured with high precision. Planar, parallel, and angular sides guarantee precise mounting with high repeatability. The camera housings conform to the IP30 protection class provided the lens mount is covered by a lens or by the cap that is shipped with the camera. The camera’s dimensions in millimeters are as shown in the drawings below.
1.4.2 Sensor Positioning Accuracy

The sensor horizontal, vertical and rotational positioning accuracies are as shown in Figure 4 for monochrome cameras and as shown in Figure 5 for color cameras.

[Figure 4: Sensor positioning accuracy for monochrome cameras; the drawing indicates rotational tolerances of ± 1° and ± 0.1°.]

Since the translatory and rotational positioning tolerances depend on each other, the worst case of maximum rotational and horizontal/vertical mis-positioning cannot occur at the same time.
[Figure 5: Sensor positioning accuracy for color cameras; the drawing indicates tolerances of 31 ± 0.2, ± 0.3°, and ± 0.1° relative to the reference plane. Drawings not to scale; tolerances are typical.]
1.4.3 Lens Adapter Dimensions

F-mount Adapter

[Fig. 6: F-mount Adapter Dimensions. The drawing shows the F-mount adapter on a runner camera; the adapter diameter is ø48 h7 and dimensions are referenced to the photosensitive surface of the sensor.]

C-mount Adapter

[Figure: C-mount Adapter Dimensions. The drawing shows the C-mount adapter on a runner camera; the adapter diameter is ø48 h7 and dimensions are referenced to the photosensitive surface of the sensor.]
AW00049313000 Specifications, Requirements, and Precautions 1.5 Avoiding EMI and ESD Problems The cameras are frequently installed in industrial environments. These environments often include devices that generate electromagnetic interference (EMI) and they are prone to electrostatic discharge (ESD). Excessive EMI and ESD can cause problems with your camera such as false triggering or can cause the camera to suddenly stop capturing images.
1.6 Environmental Requirements

1.6.1 Temperature and Humidity

Housing temperature during operation: 0 °C ... +50 °C (+32 °F ... +122 °F)
Humidity during operation: 20 % ... 80 %, relative, non-condensing
Storage temperature: -20 °C ... +80 °C (-4 °F ... +176 °F)
Storage humidity: 20 % ... 80 %, relative, non-condensing
1.7 Precautions

Avoid Dust on the Sensor

CAUTION
The camera is shipped with a cap on the lens mount. To avoid collecting dust on the camera’s sensor, make sure that you always put the cap in place when there is no lens mounted on the camera.

Applying Incorrect Power Can Damage the Camera

CAUTION
The camera’s nominal operating voltage is +12 VDC (± 10%). If the voltage applied to the camera is greater than +13.2 VDC (i.e., more than 10% above the nominal voltage), the camera can be damaged.
Warranty Precautions

To ensure that your warranty remains in force:

Do not remove the camera’s serial number label. If the label is removed and the serial number can’t be read from the camera’s registers, the warranty is void.

Do not open the camera housing. Touching internal components may damage them.
AW00049313000 Software and Hardware Installation 2 Software and Hardware Installation The information you will need to install and operate the camera is included in the Installation and Setup Guide for Cameras Used with Basler’s pylon API (AW000611xx000). You can download the Installation and Setup Guide for Cameras Used with Basler’s pylon API from the Basler website: www.baslerweb.com The guide includes the information you will need to install both hardware and software and to begin capturing images.
3 Tools for Changing Camera Parameters

This chapter explains the options available for changing the camera’s parameters. The available options let you change parameters either by using stand-alone tools that access the camera via a GUI or by accessing the camera from within your software application.
Tools for Changing Camera Parameters AW00049313000 3.3 The pylon API You can access all of the camera’s parameters and can control the camera’s full functionality from within your application software by using Basler’s pylon API. The Basler pylon Programmer’s Guide and API Reference contains an introduction to the API and includes information about all of the methods and objects included in the API. The programmer’s guide and API reference are included in the pylon SDK.
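As a minimal sketch of how the pylon API is typically used from C++ (the header name, the CBaslerGigECamera class, and the enumeration sequence shown here are taken from the pylon SDK samples and may differ between pylon versions; treat them as assumptions and consult the programmer's guide):

#include <pylon/PylonIncludes.h>
#include <pylon/gige/BaslerGigECamera.h>  // assumed header for the GigE camera class

using namespace Pylon;

int main()
{
    PylonAutoInitTerm autoInitTerm;  // initializes and cleans up the pylon runtime

    // Create the GigE transport layer and enumerate the attached cameras
    CTlFactory& tlFactory = CTlFactory::GetInstance();
    ITransportLayer* pTl = tlFactory.CreateTl( CBaslerGigECamera::DeviceClass() );
    DeviceInfoList_t devices;
    if ( pTl->EnumerateDevices( devices ) == 0 )
        return 1;  // no camera found

    // Attach and open the first camera found
    CBaslerGigECamera Camera( pTl->CreateDevice( devices[0] ) );
    Camera.Open();

    // Read and change camera parameters
    int64_t width = Camera.Width.GetValue();
    Camera.Height.SetValue( 100 );

    Camera.Close();
    return 0;
}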
AW00049313000 Basler Network Drivers and Parameters 4 Basler Network Drivers and Parameters This section describes the Basler network drivers available for your camera and provides detailed information about the parameters associated with the drivers. Two network drivers are available for the network adapter used with your GigE cameras: The Basler filter driver is a basic GigE Vision network driver that is compatible with all network adapters.
4.1 The Basler Filter Driver

The Basler filter driver is a basic GigE Vision network driver. It is designed to be compatible with most network adapter cards. The functionality of the filter driver is relatively simple. For each frame, the driver checks the order of the incoming packets.
AW00049313000 Basler Network Drivers and Parameters 4.2 The Basler Performance Driver The Basler performance driver is a hardware specific GigE Vision network driver compatible with network adapters that use specific Intel chipsets. The main advantage of the performance driver is that it significantly lowers the CPU load needed to service the network traffic between the PC and the camera(s). It also has a more robust packet resend mechanism.
Basler Network Drivers and Parameters AW00049313000 General Parameters Enable Resend - Enables the packet resend mechanisms. If the Enable Resend parameter is set to false, the resend mechanisms are disabled. The performance driver will not check for missing packets and will not send resend requests to the camera. If the Enable Resend parameter is set to true, the resend mechanisms are enabled. The performance driver will check for missing packets.
AW00049313000 Basler Network Drivers and Parameters Resend Request Threshold - This parameter determines the location of the resend request threshold within the receive window as shown in Figure 8. The parameter value is in per cent of the width of the receive window. In Figure 8 the resend request threshold is set at 33.33% of the width of the receive window. A stream of packets advances packet by packet beyond the resend request threshold (i.e. to the left of the resend request threshold in Figure 8).
Basler Network Drivers and Parameters AW00049313000 Timeout Resend Mechanism Parameters The timeout resend mechanism is illustrated in Figure 9 where the following assumptions are made: The frame includes 3000 packets. Packet 1002 is missing within the stream of packets and has not been recovered. Packets 2999 and 3000 are missing at the end of the stream of packets (end of the frame). The Maximum Number Resend Requests parameter is set to 3.
AW00049313000 Basler Network Drivers and Parameters Maximum Number Resend Requests - The Maximum Number Resend Requests parameter sets the maximum number of resend requests the performance driver will send to the camera for each missing packet. Resend Timeout - The Resend Timeout parameter defines how long (in milliseconds) the performance driver will wait after detecting that a packet is missing before sending a resend request to the camera.
Basler Network Drivers and Parameters AW00049313000 Threshold and Timeout Resend Mechanisms Combined Figure 10 illustrates the combined action of the threshold and the timeout resend mechanisms where the following assumptions are made: All parameters set to default. The frame includes 3000 packets. Packet 1002 is missing within the stream of packets and has not been recovered. Packets 2999 and 3000 are missing at the end of the stream of packets (end of the frame).
AW00049313000 Basler Network Drivers and Parameters (9) Because the maximum number of resend requests has been sent and the last Resend Response Timeout interval has expired, packet 1002 is now considered as lost. (10) End of the frame. (11) Missing packets at the end of the frame (2999 and 3000). (12) Interval defined by the Packet Timeout parameter. You can set the performance driver parameter values from within your application software by using the Basler pylon API.
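A sketch of adjusting these values through the stream grabber associated with the camera (the accessor and the member names below are assumptions derived from the parameter names described in this section; check the stream grabber's node map for the exact names in your pylon version):

// Get and open the camera's stream grabber, then adjust the resend mechanism parameters
CBaslerGigECamera::StreamGrabber_t StreamGrabber( Camera.GetStreamGrabber( 0 ) );
StreamGrabber.Open();

StreamGrabber.EnableResend.SetValue( true );              // enable the resend mechanisms
StreamGrabber.ResendRequestThreshold.SetValue( 5 );       // example value, in percent of the receive window
StreamGrabber.MaximumNumberResendRequests.SetValue( 25 ); // example value
StreamGrabber.ResendTimeout.SetValue( 10 );               // example value, in milliseconds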
Basler Network Drivers and Parameters AW00049313000 Adapter Properties When the Basler Performance driver is installed, it adds a set of "advanced" properties to the network adapter. These properties include: Max Packet Latency - A value in microseconds that defines how long the adapter will wait after it receives a packet before it generates a packet received interrupt. Max Receive Inter-packet Delay - A value in microseconds that defines the maximum amount of time allowed between incoming packets.
AW00049313000 Basler Network Drivers and Parameters 4.3 Transport Layer Parameters The transport layer parameters are part of the camera’s basic GigE implementation. These parameters do not normally require adjustment. Read Timeout - If a register read request is sent to the camera via the transport layer, this parameter designates the time out (in milliseconds) within which a response must be received.
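If you do need to change a transport layer parameter, a sketch of the corresponding code looks like this (GetTLNodeMap() and the node name "ReadTimeout" are assumptions based on the parameter name above; a corresponding "WriteTimeout" node for register write requests is also assumed):

#include <GenApi/GenApi.h>

// Access the transport layer node map of an open camera and set the read timeout
GenApi::INodeMap* pTlNodeMap = Camera.GetTLNodeMap();
GenApi::CIntegerPtr readTimeout( pTlNodeMap->GetNode( "ReadTimeout" ) );
if ( GenApi::IsWritable( readTimeout ) )
    readTimeout->SetValue( 500 );   // timeout in milliseconds; example value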
5 Network Related Camera Parameters and Managing Bandwidth

This section describes the camera parameters that are related to the camera’s performance on the network. It also describes how to use the parameters to manage the available network bandwidth when you are using multiple cameras.
Network Related Camera Parameters and Managing Bandwidth AW00049313000 The packet size parameter should always be set to the maximum size that your network adapter and network switches (if used) can handle. Inter-packet Delay (read/write) Sets the delay in ticks between the packets sent by the camera. Applies to the selected stream channel. Increasing the inter-packet delay will decrease the camera’s effective data transmission rate and will thus decrease the network bandwidth used by the camera.
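A sketch of setting both parameters from your application (GevSCPSPacketSize and GevSCPD are the names under which pylon commonly exposes the Packet Size and Inter-packet Delay parameters for GigE cameras; verify them against your camera's node map):

// Set the packet size (in bytes) and the inter-packet delay (in ticks)
Camera.GevSCPSPacketSize.SetValue( 1500 );  // example value; use the largest size your network components support
Camera.GevSCPD.SetValue( 1000 );            // example value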
AW00049313000 Network Related Camera Parameters and Managing Bandwidth 5.2 Managing Bandwidth When Multiple Cameras Share a Single Network Path If you are using a single camera on a GigE network, the problem of managing bandwidth is simple. The network can easily handle the bandwidth needs of a single camera and no intervention is required. A more complicated situation arises if you have multiple cameras connected to a single network adapter as shown in Figure 11.
transmission rate will be. After you have adjusted the Inter-packet Delay parameter on each camera, you can check the sum of the Bandwidth Assigned parameter values and see if the sum is now less than 125 MByte/s.

5.2.1 A Procedure for Managing Bandwidth

In theory, managing bandwidth sharing among several cameras is as easy as adjusting the inter-packet delay.
Step 2 - Set the Packet Size parameter on each camera as large as possible. Using the largest possible packet size has two advantages: it increases the efficiency of network transmissions between the camera and the PC, and it reduces the time required by the PC to process incoming packets. The largest packet size setting that you can use with your camera is determined by the largest packet size that can be handled by your network.
Network Related Camera Parameters and Managing Bandwidth AW00049313000 Step 3 - Set the Bandwidth Reserve parameter for each camera. The Bandwidth Reserve parameter setting for a camera determines how much of the bandwidth assigned to that camera will be reserved for lost packet resends and for asynchronous traffic such as commands sent to the camera. If you are operating the camera in a relatively EMI free environment, you may find that a bandwidth reserve of 2% or 3% is adequate.
Step 5 - Calculate “data bandwidth assigned” to each camera. For each camera, there is a parameter called Bandwidth Assigned. This read-only parameter indicates the total bandwidth that has been assigned to the camera. The Bandwidth Assigned parameter includes both the bandwidth that can be used for frame data transmission and the bandwidth that is reserved for packet resends and camera control signals.
Network Related Camera Parameters and Managing Bandwidth AW00049313000 Step 7 - Check that the total bandwidth assigned is less than the network capacity. 1. For each camera, determine the current value of the Bandwidth Assigned parameter. The value is in Byte/s. (Make sure that you determine the value of the Bandwidth Assigned parameter after you have made any adjustments described in the earlier steps.) 2. Find the sum of the current Bandwidth Assigned parameter values for all of the cameras.
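A sketch of performing this check programmatically (GevSCBWA is assumed to be the node name for the Bandwidth Assigned parameter; 'cameras' is a hypothetical container of open camera objects):

// Sum the Bandwidth Assigned values (in Byte/s) of all cameras on the network path
int64_t totalBandwidthAssigned = 0;
for ( size_t i = 0; i < cameras.size(); ++i )
{
    totalBandwidthAssigned += cameras[i]->GevSCBWA.GetValue();
}
// For a GigE network path, the sum should be less than 125 MByte/s
bool withinCapacity = ( totalBandwidthAssigned < 125000000 );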
AW00049313000 Camera Functional Description 6 Camera Functional Description This chapter provides an overview of the camera’s functionality from a system perspective. The overview will aid your understanding when you read the more detailed information included in the later chapters of the user’s manual. 6.1 Monochrome Camera Overview Each camera employs a single line CCD sensor chip designed for monochrome imaging and provides features such as electronic exposure time control and anti-blooming.
The frame buffer between the sensor and the Ethernet controller allows data to be read out of the sensor at a rate that is independent of the data transmission rate between the camera and the host computer. This ensures that the data transmission rate has no influence on image quality.

[Figure: Monochrome camera sensor architecture. The sensor's pixels feed an even shift register and an odd shift register, each followed by a VGC and an ADC.]
AW00049313000 Camera Functional Description 6.2 Color Camera Overview Each camera employs a tri-linear CCD sensor chip designed for color imaging and provides features such as electronic exposure time control and anti-blooming. The tri-linear sensor includes three lines of photosensitive elements (pixels). One line is covered with a red filter, one line with a green filter, and one line with a blue filter to provide spectral separation.
[Figure: Color camera sensor architecture. The red, green, and blue pixel lines each feed their own shift register, followed by a VGC and an ADC.]
7 Physical Interface

This chapter provides detailed information, such as pinouts and voltage requirements, for the physical interface on the camera. This information will be especially useful during your initial design-in process.

7.1 General Description of the Connections

The camera is interfaced to external circuitry via connectors located on the back of the housing: An 8-pin RJ-45 jack used to provide a 100/1000 Mbit/s Ethernet connection to the camera.
Physical Interface AW00049313000 7.2 Connector Pin Assignments and Numbering 7.2.1 Pin Assignments for the 12-Pin Receptacle The 12 pin receptacle is used to access the three physical input lines and two physical output lines on the camera. The pin assignments for the receptacle are shown in Table 4.
7.2.2 Pin Assignments for the 6-Pin Receptacle

The 6-pin receptacle is used to supply power to the camera. The pin assignments for the receptacle are shown in Table 5.

Pin 1: +12 VDC Camera Power (+12 VDC ± 10%) *
Pin 2: +12 VDC Camera Power (+12 VDC ± 10%) *
Pin 3: Not Connected
Pin 4: Not Connected
Pin 5: DC Ground **
Pin 6: DC Ground **

Table 5: Pin Assignments for the 6-pin Receptacle

Note: * Pins 1 and 2 are tied together inside of the camera.
7.2.4 Pin Numbering

Pin numbering for the camera’s 6-pin and 12-pin receptacles is as shown in Figure 17. Pin numbering for the 8-pin RJ-45 jack adheres to the Ethernet standard.

[Figure 17: Pin numbering for the 12-pin and 6-pin receptacles.]
7.3 Connector Types

7.3.1 RJ-45 Jack

The 8-pin jack for the camera’s Ethernet connection is a standard RJ-45 connector. The recommended mating connector is any standard 8-pin RJ-45 plug.

Green and Yellow LEDs

This RJ-45 jack on the camera includes a green LED and a yellow LED. When the green LED is lit, it indicates that an active network connection is available. When the yellow LED is lit, it indicates that data is being transmitted via the network connection.
Physical Interface AW00049313000 7.4 Cabling Requirements 7.4.1 Power Cable The end of the power cable that connects to the camera’s 6-pin connector must be terminated with a Hirose micro plug (part number HR10A-7P-6S) or the equivalent. The cable must be wired as shown in Figure 19. For proper EMI protection, the power cable terminated with the Hirose connector and attached to the camera must be a twin-cored, shielded cable.
AW00049313000 7.4.2 Physical Interface I/O Cable The end of the I/O cable that connects to the camera’s 12-pin connector must be terminated with a Hirose micro plug (part number HR10A-10P-12S) or the equivalent. The cable must be wired as shown in Figure 19. The maximum length of the I/O cable is 10 meters, however, we strongly recommend keeping I/O cables as short as possible. The cable must be shielded and must be constructed with twisted pair wire.
Physical Interface 7.4.3 AW00049313000 Ethernet Cables Use high-quality Ethernet cables. To avoid EMI, the cables must be shielded. Use of category 6 or category 7 cables with S/STP shielding is strongly recommended. As a general rule, applications with longer cables or applications in harsh EMI conditions require higher category cables.
AW00049313000 Physical Interface 7.5 Camera Power Camera power must be supplied to the 6-pin connector on the camera via a cable from your power supply. Nominal operating voltage is +12 VDC (± 10%) with less than one percent ripple. Power consumption is as shown in the specification tables in Section 1 of this manual. Close proximity to strong magnetic fields should be avoided. Applying Incorrect Power Can Damage the Camera CAUTION The camera’s nominal operating voltage is +12 VDC (± 10%).
Physical Interface AW00049313000 7.6 Ethernet GigE Device Information The camera uses a standard Ethernet GigE transceiver. The transceiver is fully 100/1000 Base-T 802.3 compliant.
7.7 Input and Output Lines

[Figure: Schematic of the camera’s I/O circuitry and the 12-pin receptacle. Each input (I/O In 1 to I/O In 3, with + and - contacts) is received by an RS-422 transceiver with a switchable 120 ohm termination resistor and routed to the FPGA; each output (I/O Out 1 and I/O Out 2, with + and - contacts) is driven from the FPGA through the transceiver. The pinout shown in the diagram applies when RS-422 or RS-644 differential input signals are used.]
7.7.1 Input Lines

The camera is equipped with three physical input lines designated as Input Line 1, Input Line 2, and Input Line 3. The input lines are accessed via the 12-pin connector on the back of the camera. The inputs are designed to accept RS-422 differential signals, but they can also be used with RS-644 low voltage differential signals or low voltage TTL signals.
AW00049313000 Physical Interface The RS-422 standard allows devices to be used with a bus structure to form an interface circuit. So, for example, input line 1 on several different cameras can be connected via an RS-422 bus as shown in Figure 22. RO RO R1 R3 D RT R4 RO R2 RO Fig.
Physical Interface AW00049313000 Using the Inputs with RS-644 LVDS The inputs on the camera can accept RS-644 low voltage differential signals (LVDS). If you are supplying an RS-644 LVDS signal to an input on the camera, the 120 ohm termination resistor on that input must be enabled. The input will not reliably react to RS-644 signals if the resistor is disabled. For the camera’s I/O circuitry to operate properly, you must supply a ground as shown in Figure 21.
Using the Inputs with LVTTL

A camera input line can accept a Low Voltage TTL signal when the signal is input into the camera as shown in Figure 23. The following voltage requirements apply to the camera’s I/O input (pin 2 of the 12-pin receptacle):

Voltage / Significance
+0 to +5.0 VDC: Recommended operating voltage.
+0 to +0.8 VDC: The voltage indicates a logical 0.
> +0.8 to +2.0 VDC: Region where the transition between logical 0 and logical 1 occurs; the logical state is not defined in this region.
> +2.0 VDC: The voltage indicates a logical 1.
+6.0 VDC: Absolute maximum; do not exceed this voltage.
Enabling and Disabling the Termination Resistor

You can select an input line and enable or disable the termination resistor on the line from within your application software by using the pylon API. The following code snippet illustrates using the API to set the parameter values:

Camera.LineSelector.SetValue( LineSelector_Line1 );
Camera.LineTermination.SetValue( true );
You can also use the Basler pylon Viewer application to easily set the parameters. For more information about the pylon Viewer, see Section 3.1 on page 17.

Input Line Inverters

You can set each individual input line to invert or not to invert the incoming electrical signal. To set the invert function on an input line:

Use the Line Selector to select an input line.
Then set the value of the Line Inverter parameter to true to enable inversion on the selected line or to false to disable it.
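A minimal sketch of the corresponding calls (the LineInverter parameter name is an assumption that follows the naming pattern of the other line parameters in this section):

// Select input line 1 and invert the incoming electrical signal
Camera.LineSelector.SetValue( LineSelector_Line1 );
Camera.LineInverter.SetValue( true );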
Physical Interface AW00049313000 For detailed information about selecting an input line as the source signal for the shaft encoder model Phase A or Phase B input, see Section 8.3 on page 114. Default Input Line Selections By default: Input Line 1 is selected as the source signal for the camera’s Line Start Trigger function. Input Line 1 is also selected as the source signal for shaft encoder module Phase A input.
7.7.2 Output Lines

The camera is equipped with two physical output lines designated as Output Line 1 and Output Line 2. The output lines are accessed via the 12-pin connector on the back of the camera. The outputs are designed to transmit RS-422 differential signals, but they can also be used with RS-644 low voltage differential signalling or low voltage TTL signalling.
Physical Interface AW00049313000 Using the Outputs with RS-644 LVDS You cannot directly use the RS-422 signal from a camera output line as an input to an RS-644 low voltage differential signal (LVDS) receiver. However, if a resistor network is placed on the camera’s output as shown in Figure 25, you can use the signal from the camera’s output line as an input to an RS-644 device. For the camera’s I/O circuitry to operate properly, you must supply a ground as shown in Figure 25.
AW00049313000 Physical Interface Using the Outputs with LVTTL You can use a camera output line as an input to a low voltage TTL receiver, but only if the camera’s output signal is used as shown in Figure 26. In this situation, a low will be indicated by a camera output voltage near zero, and a high will be indicated by a camera output voltage of approximately 3.3 VDC. These voltages are within the typically specified levels for low voltage TTL devices.
You can set the signal width from within your application by using a dynamic API. The following code snippet illustrates setting the parameter value:

// Select the output line
Camera.LineSelector.SetValue(LineSelector_Out1);
// Set the parameter value to 10.0 microseconds
Camera.MinOutPulseWidthAbs.SetValue(10.0);

For more information about selecting the source signal for an output line on the camera, see Section 7.7.2.4 on page 65.
AW00049313000 7.7.2.4 Physical Interface Selecting the Source Signal for an Output Line To make a physical output line useful, you must select a source signal for the output line.
// Select output line 1 as a user output
Camera.LineSelector.SetValue( LineSelector_Out1 );
Camera.LineSource.SetValue( LineSource_UserOutput );

For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters. For more information about the pylon Viewer, see Section 3.1 on page 17.
// Set the state of output line 2 and then read the state
Camera.UserOutputSelector.SetValue( UserOutputSelector_UserOutput2 );
Camera.UserOutputValue.SetValue( true );
bool currentUserOutput2State = Camera.UserOutputValue.GetValue( );

For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters.
7.7.3 Checking the State of the I/O Lines

Checking the State of All I/O Lines

You can determine the current state of all input and output lines with a single operation. To check the state of all lines: Read the value of the Line Status All parameter. You can read the Line Status All parameter value from within your application software by using the pylon API.
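A sketch of the corresponding call (LineStatusAll is the usual GenICam name for this feature; the assignment of lines to bits should be checked in your camera's documentation):

// Read the Line Status All parameter; each bit represents the current state of one I/O line
int64_t lineStatusAll = Camera.LineStatusAll.GetValue();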
AW00049313000 7.7.4 Physical Interface I/O Line Response Times In general, the response characteristics for the I/O lines on the camera are as follows: Propagation delay for an input receiver (input pins on the camera to the camera’s FPGA) is less than 70 ns. Propagation delay for an output driver (camera FPGA to the output pins on the camera) is less than 20 ns. Signal rise time and signal fall time for the output driver is less than 12.5 ns.
8 Acquisition Control

This section provides detailed information about controlling the acquisition of image information. You will find details about triggering frame and line acquisition, about setting the exposure time for acquired lines, about setting the camera’s line acquisition rate, and about how the camera’s maximum allowed line acquisition rate can vary depending on the current camera settings.

8.1 Defining a Frame
Acquisition Control AW00049313000 The Height parameter determines the number of lines that will be included in each frame. For example, assume that the Height parameter is set to 100 and that the camera has just started to acquire lines. In this case, the camera will accumulate acquired line data in an internal buffer until 100 lines have been accumulated.
AW00049313000 Acquisition Control Note The Width and Height parameters cannot be changed while the camera is in the process of acquiring frames. If the camera receives commands to change the Width or Height parameter values while it is in the process of acquiring frames: If the camera is set for single frame mode, the parameters will not change until the current frame is complete or you issue an acquisition stop command.
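As a brief sketch (using the Width and Height parameter names described above, with acquisition stopped before the change):

// Stop acquisition, then set the frame width (pixels per line) and height (lines per frame)
Camera.AcquisitionStop.Execute();
Camera.Width.SetValue( 1024 );   // example value
Camera.Height.SetValue( 100 );   // example value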
8.1.2 Defining a Frame on Color Cameras

As with any other line scan camera, the sensor in a Gigabit Ethernet (GigE) camera is used to perform a series of line acquisitions as an object passes the camera. But unlike many other cameras, GigE line scan cameras do not transmit the pixel data from each individual line to a host PC immediately after the line acquisition is complete. Instead, GigE cameras accumulate acquired lines in a buffer and assemble them into a "frame".
AW00049313000 Acquisition Control The Height parameter determines the number of lines that will be included in each frame. For example, assume that the Height parameter is set to 100 and that the camera has just started to acquire lines. In this case, the camera will accumulate acquired line data in an internal buffer until 100 lines have been accumulated.
Acquisition Control AW00049313000 Note The Width and Height parameters cannot be changed while the camera is in the process of acquiring frames. If the camera receives commands to change the Width or Height parameter values while it is in the process of acquiring frames: If the camera is set for single frame mode, the parameters will not change until the current frame is complete or you issue an acquisition stop command.
AW00049313000 Acquisition Control 8.2 Controlling Acquisition Five major elements are involved in controlling the acquisition of images: Acquisition start and acquisition stop commands The acquisition mode parameter Acquisition start triggering Frame start triggering Line start triggering 8.2.1 Acquisition Start and Stop Commands and the Acquisition Mode The use of Acquisition Start and Acquisition Stop commands and the camera’s Acquisition Mode parameter setting are related.
Setting the Acquisition Mode and Issuing Start/Stop Commands

You can set the Acquisition Mode parameter value and you can issue Acquisition Start or Acquisition Stop commands from within your application software by using the pylon API. The code snippet below illustrates using the API to set the Acquisition Mode parameter value and to issue an Acquisition Start command. Note that the snippet also illustrates setting several parameters regarding frame and line triggering.
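A minimal sketch of such a snippet (the enumeration names follow the pattern of the other snippets in this manual and should be verified against your camera's node map):

// Set the acquisition mode to continuous frame
Camera.AcquisitionMode.SetValue( AcquisitionMode_Continuous );
// Configure the frame start trigger (example values)
Camera.TriggerSelector.SetValue( TriggerSelector_FrameStart );
Camera.TriggerMode.SetValue( TriggerMode_On );
Camera.TriggerSource.SetValue( TriggerSource_Line2 );
// Configure the line start trigger (example values)
Camera.TriggerSelector.SetValue( TriggerSelector_LineStart );
Camera.TriggerMode.SetValue( TriggerMode_On );
Camera.TriggerSource.SetValue( TriggerSource_Line3 );
// Start acquiring
Camera.AcquisitionStart.Execute();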
AW00049313000 Acquisition Control A primary feature of the acquisition start trigger is that after an acquisition start trigger signal has been applied to the camera and the camera has entered the "waiting for frame start trigger" acquisition status, the camera will return to the "waiting for acquisition start trigger" acquisition status once a specified number of frame start triggers has been received.
Acquisition Control AW00049313000 When the Acquisition Start Trigger Mode parameter is set to on, the camera’s Acquisition Mode parameter must be set to continuous. 8.2.2.3 Acquisition Frame Count When the Trigger Mode parameter for the acquisition start trigger is set to on, you must set the value of the camera’s Acquisition Frame Count parameter. The value of the Acquisition Frame Count can range from 1 to 65535.
8.2.2.4 Setting The Acquisition Start Trigger Mode and Related Parameters

You can set the Trigger Mode and Trigger Source parameter values for the acquisition start trigger and the Acquisition Frame Count parameter value from within your application software by using the pylon API.
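A sketch of the corresponding calls (names assumed from the parameter descriptions above):

// Configure the acquisition start trigger and the acquisition frame count (example values)
Camera.TriggerSelector.SetValue( TriggerSelector_AcquisitionStart );
Camera.TriggerMode.SetValue( TriggerMode_On );
Camera.TriggerSource.SetValue( TriggerSource_Line1 );
Camera.AcquisitionFrameCount.SetValue( 5 );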
Acquisition Control 8.2.3 AW00049313000 Frame Start Triggering The frame start trigger is used in conjunction with the line start trigger to control the acquisition of the lines that will be included in each frame. In essence, the frame start trigger is an enabler for the line start trigger, i.e., the camera will only react to line start triggers when the frame start trigger is valid.
AW00049313000 8.2.3.2 Acquisition Control Frame Start Trigger Mode = On When the Frame Start Trigger Mode parameter is set to on, you must select a source signal for the frame start trigger. The Frame Start Trigger Source parameter specifies the source of the signal.
Acquisition Control AW00049313000 Note By default, Input Line 3 is selected as the source signal for the Frame Start Trigger. If the Frame Start Trigger Source parameter is set to Shaft Encoder Module Out, the recommended setting for the Frame Start Trigger Activation parameter is Rising Edge.
8.2.3.3 Setting the Frame Start Trigger Parameters

You can set the Trigger Mode, Trigger Source, and Trigger Activation parameter values for the frame start trigger from within your application software by using the pylon API. If your settings make it necessary, you can also issue a Trigger Software command.
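A sketch of the corresponding calls (names assumed from the parameter descriptions above):

// Configure the frame start trigger (example values)
Camera.TriggerSelector.SetValue( TriggerSelector_FrameStart );
Camera.TriggerMode.SetValue( TriggerMode_On );
Camera.TriggerSource.SetValue( TriggerSource_Line2 );
Camera.TriggerActivation.SetValue( TriggerActivation_RisingEdge );
// If the trigger source is set to software, a frame start trigger can be issued with:
// Camera.TriggerSoftware.Execute();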
8.2.3.4 Frame Timeout

The Frame Timeout allows you to set a maximum time (in microseconds) that may elapse for each frame acquisition, i.e. the maximum time for the acquisition of the lines for a frame. When the frame timeout is enabled and a time is set, a partial frame will be transmitted if the set time has elapsed before all lines specified for the frame are acquired. In addition, a frame timeout event will be generated if it was enabled.
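A sketch of enabling the feature from your application (the FrameTimeoutEnable and FrameTimeoutAbs names are assumptions based on the feature description; check your camera's node map for the exact names):

// Enable the frame timeout and set it to 100000 µs (100 ms)
Camera.FrameTimeoutEnable.SetValue( true );
Camera.FrameTimeoutAbs.SetValue( 100000.0 );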
AW00049313000 8.2.4 Acquisition Control Line Start Triggering The line start trigger is used to start a line acquisition. Keep in mind that the camera will only react to a line start trigger when the frame start trigger is valid. If the frame start trigger is invalid, line start triggers will be ignored. The first parameter associated with the line start trigger is the Trigger Mode parameter. The Trigger Mode parameter has two available settings: off and on. 8.2.4.
Acquisition Control 8.2.4.2 AW00049313000 Line Start Trigger Mode = On When the Line Start Trigger Mode parameter is set to on, you must select a source signal for the line start trigger. The Line Start Trigger Source parameter specifies the source signal.
AW00049313000 Acquisition Control Exposure Time Control with Line Start Trigger Mode On When the Line Start Trigger Mode parameter is set to on, there are three modes available to control the exposure time for each acquired line: trigger width control, timed control, and control off. You can set the camera’s Exposure Mode parameter to select one of the exposure time control modes. The modes are explained in detail below.
Acquisition Control AW00049313000 Timed Exposure Control Mode When the timed exposure control mode is selected, the exposure time for each line acquisition is determined by the value of the camera’s Exposure Time parameters. If the camera is set for rising edge triggering, the exposure time starts when the source signal for the line start trigger rises. If the camera is set for falling edge triggering, the exposure time starts when the source signal falls.
AW00049313000 Acquisition Control Exposure Start and Exposure End Delays When the line start trigger mode is set to on and an input line is selected as the source signal for the line start trigger, there is a delay between the transition of the line start signal and the actual start of exposure. For example, if you are using the timed exposure mode with rising edge triggering, there is a delay between the rise of the signal and the actual start of exposure. There is also an exposure end delay, i.e.
8.2.4.3 Setting the Line Start Trigger Parameters

You can set the Trigger Mode, Trigger Source, and Trigger Activation parameter values for the line start trigger from within your application software by using the pylon API. If your settings make it necessary, you can also select an exposure mode and set the exposure time.
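A sketch of the corresponding calls (names assumed from the parameter descriptions above):

// Configure the line start trigger (example values)
Camera.TriggerSelector.SetValue( TriggerSelector_LineStart );
Camera.TriggerMode.SetValue( TriggerMode_On );
Camera.TriggerSource.SetValue( TriggerSource_Line1 );
Camera.TriggerActivation.SetValue( TriggerActivation_RisingEdge );
// Select the timed exposure mode and set the exposure time to 50 µs
Camera.ExposureMode.SetValue( ExposureMode_Timed );
Camera.ExposureTimeAbs.SetValue( 50.0 );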
AW00049313000 8.2.5 Acquisition Control Exposure Time As described in Section 8.2.4.1 on page 87, when you are operating the camera with the Line Start Trigger Mode set to Off, the exposure time for each line acquisition will be determined by the camera’s exposure time parameters. As described in Section 8.2.4.
Acquisition Control 8.2.5.2 AW00049313000 Exposure Time Parameters If you are operating the camera in either of the following ways, you must specify an exposure time by setting the camera’s exposure time parameters: the Line Start Trigger Mode is set to Off the Line Start Trigger Mode is set to On and the Timed Exposure Time Control Mode is selected There are two ways to specify the exposure time: by setting "raw" parameter values or by setting an "absolute" parameter value.
You can set the Exposure Time Raw and Exposure Time Base Abs parameter values from within your application software by using the pylon API. The following code snippet illustrates using the API to set the parameter values:

Camera.ExposureMode.SetValue( ExposureMode_Timed );
Camera.ExposureTimeRaw.SetValue( 2 );
Camera.ExposureTimeBaseAbs.SetValue( 25.0 );

For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference.
Acquisition Control 8.2.6 AW00049313000 Use Case Descriptions and Diagrams The following pages contain a series of use case descriptions and diagrams. The descriptions and diagrams are designed to illustrate how acquisition start triggering, frame start triggering and line start triggering will work with common combinations of parameter settings. These use cases do not represent every possible combination of the parameters associated with acquisition start, frame start, and line start triggering.
Settings:
Acquisition Mode = Single Frame
Acquisition Start Trigger Mode = Off
Frame Start Trigger Mode = Off
Lines Per Frame (Height) = 3
Line Start Trigger Mode = Off

[Timing diagram for use case 1, illustrating these settings: internally generated trigger signals, the camera's trigger wait states, line exposure and readout, and frame transmission following an acquisition start command.]
Acquisition Control AW00049313000 Use Case 2 - Acquisition Start, Frame Start, and Line Start Triggering Off (Free Run), Continuous Frame Mode Use case two is illustrated on page 98. This use case is equivalent to the preceding use case one, except for the fact that the acquisition mode is set to Continuous Frame. In this use case, the Acquisition Start Trigger Mode, the Frame Start Trigger Mode, and the Line Start Trigger Mode parameters are all set to off.
Settings:
Acquisition Mode = Continuous Frame
Acquisition Start Trigger Mode = Off
Frame Start Trigger Mode = Off
Lines Per Frame (Height) = 3
Line Start Trigger Mode = Off

[Timing diagram for use case 2, illustrating these settings: internally generated trigger signals, the camera's trigger wait states, line exposure and readout, and complete and partial frame transmission.]
Acquisition Control AW00049313000 Use Case 3 - Acquisition Start and Line Start Triggering Off (Free Run), Frame Start Triggering On Use case three is illustrated on page 101. In this use case, the Acquisition Start Trigger Mode and the Line Start Trigger Mode parameters are set to off. The camera will internally manage acquisition start and line start trigger signals without any need for triggering by the user ("free run").
Settings:
Acquisition Mode = Continuous Frame
Acquisition Start Trigger Mode = Off
Frame Start Trigger Mode = On
Frame Start Trigger Source = Line 2
Frame Start Trigger Activation = Rising Edge
Lines Per Frame (Height) = 3
Line Start Trigger Mode = Off

[Timing diagram for use case 3, illustrating these settings: internally generated and user-applied trigger signals, the camera's trigger wait states, line exposure and readout, and frame transmission.]
Acquisition Control AW00049313000 Use Case 4 - Acquisition Start Triggering Off (Free Run), Frame Start and Line Start Triggering On Use case four is illustrated on page 102. In this use case, the Acquisition Start Trigger Mode parameter is set to off. The camera will internally manage acquisition start trigger signals without any need for triggering by the user ("free run").
Settings:
Acquisition Mode = Continuous Frame
Acquisition Start Trigger Mode = Off
Frame Start Trigger Mode = On
Frame Start Trigger Source = Line 2
Frame Start Trigger Activation = Rising Edge
Lines Per Frame (Height) = 3
Line Start Trigger Mode = On
Line Start Trigger Source = Line 3
Line Start Trigger Activation = Rising Edge

[Timing diagram for use case 4, illustrating these settings.]
Acquisition Control AW00049313000 Use Case 5 - Acquisition Start Triggering Off (Free Run), Frame Start and Line Start Triggering On, Frame Start Trigger Level High, Partial Closing Frame False Use case five is illustrated on page 105. In this use case, the Acquisition Start Trigger Mode parameter is set to off. The camera will internally manage acquisition start trigger signals without any need for triggering by the user ("free run").
Settings:
Acquisition Mode = Continuous Frame
Acquisition Start Trigger Mode = Off
Frame Start Trigger Mode = On
Frame Start Trigger Source = Line 2
Frame Start Trigger Activation = Level High
Partial Closing Frame = False
Lines Per Frame (Height) = 3
Line Start Trigger Mode = On
Line Start Trigger Source = Line 3
Line Start Trigger Activation = Rising Edge

[Timing diagram for use case 5, illustrating these settings.]
Acquisition Control AW00049313000 Use Case 6 - Acquisition Start Triggering Off (Free Run), Frame Start and Line Start Triggering On, Frame Start Trigger Level High, Partial Closing Frame True Use case six is illustrated on page 107. This use case is equivalent to the preceding use case five, except for the fact that Partial Closing Frame is set to True. In this use case, the Acquisition Start Trigger Mode parameter is set to off.
Settings:
Acquisition Mode = Continuous Frame
Acquisition Start Trigger Mode = Off
Frame Start Trigger Mode = On
Frame Start Trigger Source = Line 2
Frame Start Trigger Activation = Level High
Partial Closing Frame = True
Lines Per Frame (Height) = 3
Line Start Trigger Mode = On
Line Start Trigger Source = Line 3
Line Start Trigger Activation = Rising Edge

[Timing diagram for use case 6, illustrating these settings.]
Acquisition Control AW00049313000 Use Case 7 - Acquisition Start and Frame Start Triggering Off (Free Run), Line Start Triggering On Use case seven is illustrated on page 109. This use case is equivalent to use case two, except for the fact that the Line Start Trigger Mode parameter is set to on. In this use case, the Acquisition Start Trigger Mode and the Frame Start Trigger Mode parameters are set to off.
Settings:
Acquisition Mode = Continuous Frame
Acquisition Start Trigger Mode = Off
Frame Start Trigger Mode = Off
Lines Per Frame (Height) = 3
Line Start Trigger Mode = On
Line Start Trigger Source = Line 3
Line Start Trigger Activation = Rising Edge

[Timing diagram for use case 7, illustrating these settings.]
Acquisition Control AW00049313000 Use Case 8 - Acquisition Start Triggering On, Frame Start and Line Start Triggering Off (Free Run) Use case eight is illustrated on page 110. In this use case, the Acquisition Start Trigger Mode parameter is set to on, requiring that an acquisition start trigger signal is applied to the camera. The Frame Start Trigger Mode and the Line Start Trigger Mode parameters are set to off.
Settings:
Acquisition Mode = Continuous Frame
Acquisition Start Trigger Mode = On
Acquisition Start Trigger Source = 1
Acquisition Start Trigger Activation = Rising Edge
Acquisition Frame Count = 2
Frame Start Trigger Mode = Off
Lines Per Frame (Height) = 3
Line Start Trigger Mode = Off

[Timing diagram for use case 8, illustrating these settings.]
Acquisition Control AW00049313000 Use Case 9 - Acquisition Start and Line Start Triggering On, Frame Start Triggering Off (Free Run) Use case nine is illustrated on page 113. In this use case, the Acquisition Start Trigger Mode and the Line Start Trigger Mode parameters are set to on, requiring that an acquisition start and a line start trigger signal are applied to the camera. The Frame Start Trigger Mode parameter is set to off.
Settings:
Acquisition Mode = Continuous Frame
Acquisition Start Trigger Mode = On
Acquisition Start Trigger Source = 1
Acquisition Start Trigger Activation = Rising Edge
Acquisition Frame Count = 2
Frame Start Trigger Mode = Off
Lines Per Frame (Height) = 3
Line Start Trigger Mode = On
Line Start Trigger Source = Line 3
Line Start Trigger Activation = Rising Edge

[Timing diagram for use case 9, illustrating these settings.]
Acquisition Control AW00049313000 8.3 The Shaft Encoder Module The camera is equipped with a shaft encoder software module. The module can accept input from a two channel shaft encoder (Phase A and Phase B). The module outputs a signal that can be used, for example, as a source signal for the line start trigger function or the frame start trigger function in the camera. Figure 43 shows a typical implementation of the shaft encoder software module in the camera.
AW00049313000 Acquisition Control Note that if this interpretation of direction is not as you desire, you could change it by moving the Phase A output from the shaft encoder to input line 2 and the Phase B output to input line 1. Shaft Encoder Module Parameters There are several parameters and commands associated with the shaft encoder module. The list below describes the parameters and commands and explains how they influence the operation of the module.
Acquisition Control AW00049313000 Setting the Shaft Encoder Module Parameters To use the shaft encoder software module effectively, you should do the following: Select a signal source for the Phase A and Phase B inputs on the module. (By default, input line 1 is selected as the signal source for the Phase A input and input line 2 is selected as the signal source for the Phase B input.) Make sure that the output from the encoder module is selected as the signal source for a camera function.
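A sketch of the corresponding calls (the shaft encoder module parameter and enumeration names below are assumptions derived from the parameter descriptions in this section; verify them against your camera's node map):

// Select input line 1 as the source for Phase A and input line 2 for Phase B
Camera.ShaftEncoderModuleLineSelector.SetValue( ShaftEncoderModuleLineSelector_PhaseA );
Camera.ShaftEncoderModuleLineSource.SetValue( ShaftEncoderModuleLineSource_Line1 );
Camera.ShaftEncoderModuleLineSelector.SetValue( ShaftEncoderModuleLineSelector_PhaseB );
Camera.ShaftEncoderModuleLineSource.SetValue( ShaftEncoderModuleLineSource_Line2 );
// Set the module mode and the reverse counter maximum (example values)
Camera.ShaftEncoderModuleMode.SetValue( ShaftEncoderModuleMode_ForwardOnly );
Camera.ShaftEncoderModuleReverseCounterMax.SetValue( 15 );
// Select the shaft encoder module output as the source signal for the line start trigger
Camera.TriggerSelector.SetValue( TriggerSelector_LineStart );
Camera.TriggerSource.SetValue( TriggerSource_ShaftEncoderModuleOut );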
AW00049313000 Acquisition Control The Reverse Counter The main purpose of the reverse counter is to compensate for mechanical "jitter" in the conveyor used to move objects past the camera. This jitter usually manifests itself as a momentary change in the direction of the conveyor. The rules that govern the operation of the reverse counter are as follows: If the conveyor is running in the reverse direction and the current reverse counter count is less than the maximum (i.e.
Acquisition Control AW00049313000 Case 2 In this case, assume that: A shaft encoder is attached to a conveyor belt that normally moves continuously in the forward direction past a camera. The conveyor occasionally "jitters" and when it jitters, it moves in reverse for 4 or 5 ticks. For this case, the Shaft Encoder Module Mode parameter should be set to Forward Only. The Shaft Encoder Module Reverse Counter Max should be set to a value that is higher than the jitter we expect to see.
AW00049313000 Acquisition Control Case 3 In this case, assume that: We are working with a small conveyor that moves back and forth in front of a camera. A shaft encoder is attached to the conveyor. The conveyor moves in the forward direction past the camera through its complete range of motion, stops, and then begins moving in reverse. The conveyor moves in the reverse direction past the camera through its complete range of motion, stops, and then begins moving forward.
4. The conveyor reaches the end of its forward travel and it stops.

5. The conveyor begins moving in reverse and the shaft encoder starts generating reverse ticks. The reverse counter will increment by 1 for each reverse tick.
9. The conveyor reaches the end of its reverse travel and it stops.

10. The conveyor begins moving forward and the shaft encoder starts generating forward ticks. The reverse counter is at 10 and will now begin decrementing by 1 for each forward tick. While the reverse counter is decrementing and the reverse count is greater than 0, the output of trigger signals from the module is suppressed.

The reverse counter reaches 0.
Acquisition Control 8.4 AW00049313000 Frequency Converter The camera is equipped with a frequency converter module that allows triggering the camera at a frequency that differs from the frequency of the input signals received. The module can accept input signals from one of the three input lines or signals (ticks) from the shaft encoder module. The frequency converter module includes three sub-modules acting in sequence on the original signals: The pre-divider module receives the input signals.
changed from the divider module. You can use the frequency converter to multiply the original signal frequency by a fractional value. We recommend multiplying the frequency by the numerator value using the multiplier module and dividing the resulting frequency by the denominator value using the post-divider module. You can configure the frequency converter module from within your application by using a dynamic API.
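A sketch of configuring the module (the parameter names are assumptions derived from the module description; verify them against your camera's node map):

// Use the shaft encoder module output as the input to the frequency converter
Camera.FrequencyConverterInputSource.SetValue( FrequencyConverterInputSource_ShaftEncoderModuleOut );
// Convert the input frequency by the fraction 3/4: multiply by the numerator,
// then divide by the denominator (pre-divider left at 1)
Camera.FrequencyConverterPreDivider.SetValue( 1 );
Camera.FrequencyConverterMultiplier.SetValue( 3 );
Camera.FrequencyConverterPostDivider.SetValue( 4 );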
8.5 Acquisition Monitoring Tools

The camera includes the acquisition status feature and generates four output signals that you can use to monitor the progress of line and frame acquisition by the camera: the exposure active signal, the acquisition trigger wait signal, the frame trigger wait signal, and the line trigger wait signal. The camera also allows selecting the output of the frequency converter module or the shaft encoder module as output signals.
8.5.2 Acquisition Status

If a camera receives a software acquisition start trigger signal when it is not in a "waiting for acquisition start trigger" acquisition status, it will simply ignore the trigger signal and will generate an acquisition start overtrigger event.
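A sketch of polling the acquisition status before issuing a software trigger (AcquisitionStatusSelector and AcquisitionStatus are the usual GenICam names for this feature and are assumptions here):

// Check whether the camera is waiting for an acquisition start trigger
Camera.AcquisitionStatusSelector.SetValue( AcquisitionStatusSelector_AcquisitionTriggerWait );
if ( Camera.AcquisitionStatus.GetValue() )
{
    // The camera is ready; issue a software acquisition start trigger
    Camera.TriggerSelector.SetValue( TriggerSelector_AcquisitionStart );
    Camera.TriggerSoftware.Execute();
}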
Acquisition Control 8.5.3 AW00049313000 Acquisition Trigger Wait Signal The camera’s Acquisition Trigger Wait output signal will be low when the camera is in the process of acquiring a frame and is not able to accept a new acquisition start trigger. As soon as the camera is ready to accept a new acquisition start trigger, the Acquisition Trigger Wait signal will go high. This signal can be selected as the source signal for one of the output lines on the camera.
AW00049313000 Acquisition Control 8.6 Frame Transmission Time As mentioned in earlier sections of this chapter, each time that a complete frame has been accumulated in the camera’s frame memory, the frame will be transmitted from the camera to your host PC via the camera’s Ethernet network connection. The image data in the frame will be packetized and transmitted in compliance with the mechanisms described in the GigE Vision standard.
Acquisition Control 8.7 AW00049313000 Maximum Allowed Line Acquisition Rate In general, the maximum allowed line acquisition rate can be limited by three factors: The amount of time it takes to read an acquired line out of the imaging sensor and into the camera’s frame buffer. Since readout time is fixed, it establishes an absolute maximum for the line rate. Note that the readout time stays the same regardless of the Width parameter setting for the frame. The exposure time for acquired lines.
Max Lines/s (based on sensor readout):
ruL1024-19gm: 18700
ruL1024-36gm: 35700
ruL1024-57gm: 56100
ruL2048-10gm: 9700
ruL2048-19gm: 18700
ruL2048-30gm: 29200
ruL2098-10gc: 9200

Factor 2:
Factor 2 is the exposure time.
Acquisition Control AW00049313000 could increase the packet size or decrease the inter-packet delay. If you are using several cameras connected to the host PC via a network switch, you could also use a multiport network adapter in the PC instead of a switch. This would allow you to increase the Ethernet bandwidth assigned to the camera and thus decrease the transmission time. For more information on the settings that determine the bandwidth assigned to the camera, see Section 5.2 on page 33.
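As a sketch, the packet size and the inter-packet delay can be adjusted through the pylon API as shown below. The values shown are examples only, not recommendations; the valid ranges depend on your network hardware and on the bandwidth settings discussed in the section referenced above.
// Example values only - choose values appropriate for your network
Camera.GevSCPSPacketSize.SetValue( 1500 );  // packet size in bytes
Camera.GevSCPD.SetValue( 0 );               // inter-packet delay in ticks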
9 Spatial Correction on Color Cameras
This section provides detailed information about the spatial correction feature available on color cameras. The section also provides information about system design considerations regarding spatial correction.
9.1 What is Spatial Correction
As shown in Figure 46, the sensor used in color cameras has three lines of pixels and the lines are spaced 112 µm apart center-to-center.
Fig. 46: Sensor Face with Three Lines Spaced 112 µm Apart (schematic)
Fig. 47: An Object Passing the Camera (schematic)
AW00049313000 Spatial Correction on Color Cameras Now, consider a single small area on an object passing the camera and define it as area "A". Figure 48 illustrates that as the object passes the camera, area A will fall into the line of view of the red line, of the green line, and of the blue line at three different times. This means that the red information for area A, the green information for area A, and the blue information for area A will be collected during three different acquisitions.
Spatial Correction on Color Cameras AW00049313000 To better understand the concept of spatial correction, consider a simple example. In this example, we will make the following assumptions: The optics and the distance between the camera and the conveyor have been arranged so that we have a 1 to 10 magnification. This means that an area of 0.14 mm x 0.14 mm on the object will create a 14 µm x 14 µm image on one pixel. We have an encoder on our system and each step of the encoder represents a 0.
AW00049313000 9.1.1 Spatial Correction on Color Cameras The Spatial Correction Parameter You use a single parameter, the Spatial Correction parameter, to enable or disable spatial correction and to set two aspects of spatial correction: the starting line and the delay. The spatial correction starting line relates to the direction in which the object is passing the camera.
Setting the Spatial Correction Parameter
You can set the Spatial Correction parameter value from within your application software by using the pylon API. The following code snippet illustrates using the parameter value to set spatial correction for a red starting line with a delay of +8:
// Set the Spatial Correction (a positive value selects a red starting line;
// the magnitude of the value sets the delay)
Camera.SpatialCorrection.SetValue( 8 );
AW00049313000 Spatial Correction on Color Cameras 9.2 Camera Operating Requirements for Proper Spatial Correction To achieve proper spatial correction, certain camera operating requirements should be met. Line Acquisition Line acquisition should be triggered by using the signals from a shaft encoder. Use of the timed exposure control mode is strongly recommended to ensure uniform exposure. The trigger width or trigger mode off exposure modes can be used but only if the conveyor speed is 100% stable.
Spatial Correction on Color Cameras AW00049313000 9.3 System Design Requirements for Proper Spatial Correction As explained earlier in this chapter, spatial correction is used to align color information in the image. For a given area on the object to be acquired correctly, its image must fall precisely on the red line, the green line, and the blue line in the sensor.
AW00049313000 Spatial Correction on Color Cameras Sensor Perpendicularity The sensor lines in the camera must be perpendicular to the conveyor’s line of travel. If the sensor lines are not perpendicular to the line of travel, a slightly different area of the object will fall on each line as shown in Figure 53. This situation will cause haloing that is perpendicular to the transport direction. The halos will be constant in size.
Spatial Correction on Color Cameras AW00049313000 Sensor - Conveyor Parallelism The face of the sensor in the camera and the surface of the conveyor should be in parallel planes. This condition should be met in order to ensure that all of the pixels in the sensor lines view the object at the same magnification. If the camera is positioned so that the sensor is rotated on its long axis as shown in Figure 55, you will see haloing that is perpendicular to the transport direction.
Fig.: Correct (Yes) and Incorrect (No) Orientation of the Sensor’s View Relative to the Transport Direction (schematic)
Conveyor Travel
The conveyor must travel in a straight line. If the conveyor motion is not straight, each line in the sensor will scan a different area of the object as shown in Figure 57. This situation will cause haloing that is perpendicular to the transport direction. The halos will vary in size and color.
Fig. 57: Straight (Yes) and Non-straight (No) Conveyor Travel
AW00049313000 9.3.1 Spatial Correction on Color Cameras System Design Calculations Assuming that the camera operating requirements and the system design requirements mentioned earlier in this chapter are met, the formulas below can be used to calculate the basic design criteria for your system.
Spatial Correction on Color Cameras AW00049313000 Example 1 Assume the following conditions: Conveyor movement per line acquisition = 0.2 mm Desired aspect ratio = 1/1 Conveyor Width = 350 mm Length of Sensor Line = 29.372 mm (2098 pixels/line x 14 µm/pixel) With an objective lens in place, the direction of travel of the object will cause the image to cross the red line in the sensor first. 1. Check Table 10 and note that for a 1/1 aspect ratio, n must be set to 8. 2.
AW00049313000 Spatial Correction on Color Cameras Example 2 There is a second approach to calculating system design criteria that is less concerned with aspect ratio. This approach is more tuned towards matching the line of view of the sensor to the width of your conveyor. Example 2 illustrates this approach. Assume the following conditions: Conveyor Width = 350 mm Conveyor Movement per Encoder Step = 0.
Option 1
1. Calculate the conveyor movement that will generate six encoder steps: 6 steps x 0.2 mm/step = 1.2 mm
2. Calculate the magnification needed to make 1.2 mm of conveyor movement result in 112 µm movement of the image: 112 µm / 1.2 mm = 0.0933 and 1 / 0.0933 = 10.72, i.e., a magnification of 1 : 10.72
3. Calculate the width of conveyor that will be viewed by each sensor line at this magnification: 29.372 mm x 10.72 = 314.87 mm
Option 2 uses seven encoder steps instead of six: 7 steps x 0.2 mm/step = 1.4 mm, 112 µm / 1.4 mm = 0.08, i.e., a magnification of 1 : 12.5. At this magnification, the width of conveyor that will be viewed by each sensor
line will be 367.15 mm. Since it is usually more acceptable to have a field of view slightly larger than the conveyor, assume that you choose option 2.
4. Select an appropriate lens and determine the mounting distance for your camera.
5. Make sure that the Spatial Correction parameter is set correctly. In this example, the parameter setting would be -7.
6. Acquire lines.
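The relationships used in Example 1 and Example 2 can be summarized as follows. This is a recap derived from the worked numbers above rather than an additional specification; n stands for the magnitude of the Spatial Correction parameter setting (the delay in line acquisitions) and d for the conveyor movement per line acquisition (in Example 2, one line acquisition per encoder step is assumed):
magnification (image : object) = 112 µm / (n x d)
width of conveyor viewed by each sensor line = 29.372 mm / magnification
the aspect ratio is 1 : 1 when magnification x d = 14 µm (the size of one pixel)
As a check against option 2 above: 112 µm / (7 x 0.2 mm) = 0.08 and 29.372 mm / 0.08 = 367.15 mm, which matches the field of view calculated in the example.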
AW00049313000 Pixel Data Formats 10 Pixel Data Formats By selecting a pixel data format, you determine the format (layout) of the image data transmitted by the camera. This section provides detailed information about the available pixel data formats. 10.1 Setting the Pixel Data Format The setting for the camera’s Pixel Format parameter determines the format of the pixel data that will be output from the camera. The available pixel formats depend on whether the camera is monochrome or color.
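For example, the pixel format can be set from within your application software by using the pylon API; the snippet below selects the Mono 8 format (substitute the enumeration value for the format you need):
// Set the pixel data format to Mono 8
Camera.PixelFormat.SetValue( PixelFormat_Mono8 );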
Pixel Data Formats AW00049313000 10.2 Pixel Data Formats for Mono Cameras 10.2.1 Mono 8 Format When a monochrome camera is set for the Mono 8 pixel data format, it outputs 8 bits of brightness data per pixel. The table below describes how the pixel data for a received frame will be ordered in the image buffer in your PC when the camera is set for Mono 8 output.
AW00049313000 Pixel Data Formats 10.2.2 Mono 16 Format When a monochrome camera is set for the Mono 16 pixel data format, it outputs 16 bits of brightness data per pixel with 12 bits effective. The 12 bits of effective pixel data fill from the least significant bit. The four unused most significant bits are filled with zeros. The table below describes how the pixel data for a received frame will be ordered in the image buffer in your PC when the camera is set for Mono 16 output.
Pixel Data Formats AW00049313000 When the camera is set for Mono 16, the pixel data output is 16 bit data of the “unsigned short (little endian)” type. The available range of data values and the corresponding indicated signal levels are as shown in the table below. Note that for a 16 bit data format, you might expect a value range from 0x0000 to 0xFFFF. However, with the camera set for Mono 16 only 12 bits of the 16 bits transmitted are effective.
AW00049313000 Pixel Data Formats 10.2.3 Mono 12 Packed Format When a monochrome camera is set for the Mono 12 Packed pixel data format, it outputs 12 bits of brightness data per pixel. Every three bytes transmitted by the camera contain data for two pixels. The table below describes how the pixel data for a received frame will be ordered in the image buffer in your PC when the camera is set for Mono 12 Packed output.
Pixel Data Formats AW00049313000 When a monochrome camera is set for Mono 12 Packed, the pixel data output is 12 bit data of the “unsigned” type. The available range of data values and the corresponding indicated signal levels are as shown in the table below.
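If you unpack Mono 12 Packed data on the host yourself, the following sketch shows one way to do it. It assumes the standard GigE Vision Mono 12 Packed layout (byte 0 = bits 11...4 of the first pixel, low nibble of byte 1 = bits 3...0 of the first pixel, high nibble of byte 1 = bits 3...0 of the second pixel, byte 2 = bits 11...4 of the second pixel); verify this against the byte ordering table for your camera before relying on it.
// Unpack two 12 bit pixel values from one three-byte Mono 12 Packed group
void UnpackMono12Packed( const unsigned char* b, unsigned short& p0, unsigned short& p1 )
{
    p0 = static_cast<unsigned short>( ( b[0] << 4 ) | ( b[1] & 0x0F ) );          // first pixel
    p1 = static_cast<unsigned short>( ( b[2] << 4 ) | ( ( b[1] >> 4 ) & 0x0F ) ); // second pixel
}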
AW00049313000 Pixel Data Formats 10.2.4 YUV 4:2:2 Packed Format When a monochrome camera is set for the YUV 4:2:2 Packed pixel data format, the camera transmits Y, U, and V values in a fashion that mimics the output from a color camera set for YUV 4:2:2 Packed. The Y value transmitted for each pixel is an actual 8 bit brightness value similar to the pixel data transmitted when a monochrome camera is set for Mono 8. The U and V values transmitted will always be zero.
Pixel Data Formats AW00049313000 10.3 Pixel Data Formats for Color Cameras 10.3.1 Mono 8 Format When a color camera is set for the Mono 8 pixel data format, the pixel values in each acquired image are first converted to the YUV color model as described later in this chapter for the YUV 4:2:2 Packed format. The camera then transmits the 8 bit Y value for each pixel to the host PC. In the YUV color model, the Y component for each pixel represents a brightness value.
AW00049313000 Pixel Data Formats With the camera set for Mono 8, the pixel data output is 8 bit data of the “unsigned char” type. The available range of data values and the corresponding indicated signal levels are as shown in the table below.
Pixel Data Formats AW00049313000 10.3.2 RGB 8 Packed Format When a color camera is set for the RGB 8 Packed pixel data format, it outputs 8 bits of red data, 8 bits of green data, and 8 bits of blue data for each pixel in the acquired frame. The table below describes how the pixel data for a received frame will be ordered in the image buffer in your PC when the camera is set for RGB 8 Packed output.
10.3.3 RGB 12 Packed Format
When a color camera is set for the RGB 12 Packed pixel data format, it outputs the following for each pixel in an acquired frame:
16 bits of red data per pixel with 12 bits effective
16 bits of green data per pixel with 12 bits effective
16 bits of blue data per pixel with 12 bits effective
The 12 bits of effective data fill from the least significant bit. The four unused most significant bits are filled with zeros.
Pixel Data Formats AW00049313000 When the camera is set for RGB 12 Packed, the pixel data output is 16 bit data of the “unsigned short (little endian)” type. The available range of data values and the corresponding indicated signal levels are as shown in the table below. Note that for 16 bit data, you might expect a value range from 0x0000 to 0xFFFF. However, with the camera set for RGB 12 Packed only 12 bits of the 16 bits transmitted are effective.
10.3.4 RGB 12 V1 Packed Format
When a color camera is set for the RGB 12 V1 Packed pixel data format, it outputs 12 bits of red data, 12 bits of green data, and 12 bits of blue data for each pixel in an acquired frame. Every nine bytes transmitted by the camera contain red, blue, and green data for two pixels. The table below describes how the pixel data for a received frame will be ordered in the image buffer in your PC when the camera is set for RGB 12 V1 Packed output.
The table continues as follows for the last bytes in the buffer:
Byte Bm-8: R Pn-1 bits 11...4
Byte Bm-7: R Pn-1 bits 3...0 and G Pn-1 bits 3...0
Byte Bm-6: G Pn-1 bits 11...4
Byte Bm-5: B Pn-1 bits 11...4
Byte Bm-4: B Pn-1 bits 3...0 and R Pn bits 3...0
Byte Bm-3: R Pn bits 11...4
Byte Bm-2: G Pn bits 11...4
Byte Bm-1: G Pn bits 3...0 and B Pn bits 3...0
Byte Bm: B Pn bits 11...4
When a color camera is set for RGB 12 V1 Packed, the pixel data output is 12 bit data of the “unsigned” type.
10.3.5 YUV 4:2:2 Packed Format
When a color camera is set for the YUV 4:2:2 Packed pixel data format, each pixel in an acquired frame goes through a conversion algorithm that transforms the pixel data from the R, G, B color model to the Y, U, V color model. The conversion algorithm uses the following formulas:
Y = 0.30 R + 0.59 G + 0.11 B
U = -0.17 R - 0.33 G + 0.50 B
V = 0.50 R - 0.41 G - 0.09 B
The byte ordering continues as follows:
Byte B7: Y P3 bits 7...0
Byte B8: U P4 bits 7...0
Byte B9: Y P4 bits 7...0
Byte B10: V P4 bits 7...0
Byte B11: Y P5 bits 7...0
...
Byte Bm-7: U Pn-3 bits 7...0
Byte Bm-6: Y Pn-3 bits 7...0
Byte Bm-5: V Pn-3 bits 7...0
Byte Bm-4: Y Pn-2 bits 7...0
Byte Bm-3: U Pn-1 bits 7...0
Byte Bm-2: Y Pn-1 bits 7...0
Byte Bm-1: V Pn-1 bits 7...0
Byte Bm: Y Pn bits 7...0
When the camera is set for YUV 4:2:2 Packed output, the pixel data output for the Y component is 8 bit data of the “unsigned char” type.
AW00049313000 Pixel Data Formats The pixel data output for the U component or the V component is 8 bit data of the “straight binary” type. The range of data values for a U or a V component and the corresponding indicated signal levels are shown below.
Pixel Data Formats AW00049313000 10.3.6 YUV 4:2:2 (YUYV) Packed Format On color cameras, the YUV 4:2:2 (YUYV) packed pixel data format is similar to the YUV 4:2:2 pixel format described in the previous section. The only difference is the order of the bytes transmitted to the host PC. With the YUV 4:2:2 format, the bytes are ordered as specified in the DCAM standard issued by the 1394 Trade Association.
AW00049313000 Pixel Data Formats When a color camera is set for YUV 4:2:2 (YUYV) output, the pixel data output for the Y component is 8 bit data of the “unsigned char” type. The range of data values for the Y component and the corresponding indicated signal levels are shown below.
10.4 Pixel Transmission Sequence
For each acquired frame, pixel data is transmitted from the camera in the following sequence:
Row 0 Col 0, Row 0 Col 1, Row 0 Col 2 ... Row 0 Col m-2, Row 0 Col m-1, Row 0 Col m
Row 1 Col 0, Row 1 Col 1, Row 1 Col 2 ... Row 1 Col m-2, Row 1 Col m-1, Row 1 Col m
Row 2 Col 0, Row 2 Col 1, Row 2 Col 2 ...
AW00049313000 Standard Features 11 Standard Features This chapter provides detailed information about the standard features available on each camera. It also includes an explanation of their operation and the parameters associated with each feature. 11.1 Gain and Black Level on Monochrome Cameras 11.1.1 Gain The camera’s gain is adjustable. As shown in Figure 58, increasing the gain increases the slope of the response curve for the camera.
Standard Features AW00049313000 Gain Raw Tap 1 sets an additional amount of gain for the even numbered pixels in the sensor (pixels 0, 2, 4, 6, etc.). The total gain for each even numbered pixel will be the sum of the Gain Raw All parameter value plus the Gain Raw Tap 1 parameter value. Gain Raw Tap 2 sets an additional amount of gain for the odd numbered pixels in the sensor (pixels 1, 3, 5, 7, etc.).
For example, assume that you have set the Gain Raw All to 450, the Gain Raw Tap 1 to 0, and the Gain Raw Tap 2 to 0. Then:
Even Pixel Gain = (0.0359 x 450) + (0.0359 x 0) = 16.2 dB
Odd Pixel Gain = (0.0359 x 450) + (0.0359 x 0) = 16.2 dB
Setting the Gain
To set the Gain Raw All parameter value: Set the Gain Selector to All. Set the Gain Raw parameter to your desired value.
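The following sketch shows these steps with the pylon API. The enumeration names (GainSelector_All, GainSelector_Tap1, GainSelector_Tap2) are assumed from the parameter names above, and the gain values are examples only:
// Set the Gain Raw All value and the additional tap gains
Camera.GainSelector.SetValue( GainSelector_All );
Camera.GainRaw.SetValue( 450 );
Camera.GainSelector.SetValue( GainSelector_Tap1 );
Camera.GainRaw.SetValue( 0 );
Camera.GainSelector.SetValue( GainSelector_Tap2 );
Camera.GainRaw.SetValue( 0 );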
Standard Features AW00049313000 11.1.2 Black Level Adjusting the camera’s black level will result in an offset to the pixel values output from the camera. As mentioned in the "Functional Description" section of this manual, the sensor uses two different taps to read pixel data out of the imaging sensor. One tap is used to read out values for the even numbered pixels in the sensor and one is used to read out values for the odd numbered pixels.
AW00049313000 Standard Features Setting the Black Level To set the Black Level Raw All value: Set the Black Level Selector to All. Set the Black Level Raw parameter to your desired value. To set the Black Level Raw Tap 1 value: Set the Black Level Selector to Tap 1. Set the Black Level Raw parameter to your desired value. To set the Black Level Raw Tap 2 value: Set the Black Level Selector to Tap 2. Set the Black Level Raw parameter to your desired value.
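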
Standard Features AW00049313000 11.2 Gain, White Balance, and Black Level on Color Cameras 11.2.1 Gain The camera’s gain is adjustable. As shown in Figure 59, increasing the gain increases the slope of the response curve for the camera. This results in an increase in the pixel values output from the camera for a given amount of output from the imaging sensor. Decreasing the gain decreases the slope of the response curve and results in lower pixel values for a given amount of sensor output.
Table 14: Minimum and Maximum Allowed Gain Raw Settings
Gain Raw All: min/max 0 / 600 (8 bit depth), 0 / 400 (12 bit depth)
Gain Raw Red: min/max 0 / 600 (8 bit depth), 0 / 400 (12 bit depth)
Gain Raw Green: min/max 0 / 600 (8 bit depth), 0 / 400 (12 bit depth)
Gain Raw Blue: min/max 0 / 600 (8 bit depth), 0 / 400 (12 bit depth)
There are also minimum and maximum restrictions for the
Standard Features AW00049313000 Setting the Gain To set the Gain Raw All parameter value: Set the Gain Selector to All. Set the Gain Raw parameter to your desired value. To set the Gain Raw Red parameter value: Set the Gain Selector to Red. Set the Gain Raw Red parameter to your desired value. To set the Gain Raw Green parameter value: Set the Gain Selector to Green. Set the Gain Raw Green parameter to your desired value.
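As a sketch, the same steps look like this in the pylon API; the enumeration names and the gain values are assumptions used only for illustration:
// Set the overall gain and the per-color gains
Camera.GainSelector.SetValue( GainSelector_All );
Camera.GainRaw.SetValue( 300 );
Camera.GainSelector.SetValue( GainSelector_Red );
Camera.GainRaw.SetValue( 280 );
Camera.GainSelector.SetValue( GainSelector_Green );
Camera.GainRaw.SetValue( 300 );
Camera.GainSelector.SetValue( GainSelector_Blue );
Camera.GainRaw.SetValue( 320 );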
AW00049313000 Standard Features 11.2.2 White Balance White balancing on color cameras can be achieved by manipulating the red, blue, and green gain settings described in the previous section. For example, to make acquired images appear more red, you could increase the value of the Gain Raw Red parameter. And to make the image appear less red, you could decrease the value of the parameter. You can also change the apparent effect of one color by manipulating the gain values for the other two colors.
Standard Features AW00049313000 11.2.3 Black Level Adjusting the camera’s black level will result in an offset to the pixel values output from the camera. As mentioned in the "Functional Description" section of this manual, the sensor uses three different taps to read pixel data out of the imaging sensor. One tap is used to read out values for the red pixels in the sensor, one is used to read out values for the green pixels, and one is used to read out values for the blue pixels.
AW00049313000 Standard Features Setting the Black Level To set the Black Level Raw All value: Set the Black Level Selector to All. Set the Black Level Raw parameter to your desired value. To set the Black Level Raw Red value: Set the Black Level Selector to Red. Set the Black Level Raw parameter to your desired value. To set the Black Level Raw Green value: Set the Black Level Selector to Green. Set the Black Level Raw parameter to your desired value.
11.3 Digital Shift
The digital shift feature lets you change the group of bits that is output from each ADC in the camera. Using the digital shift feature will effectively multiply the output of the camera by 2 times, 4 times, 8 times, or 16 times. The next two sections describe how the digital shift feature works when the camera is set for a 12 bit pixel format and when it is set for an 8 bit pixel format.
Shift by 2
When the camera is set to shift by 2, the output from the camera will include bit 9 through bit 0 from each ADC along with 2 zeros as LSBs. The result of shifting twice is that the output of the camera is effectively multiplied by 4.
When the camera is set to shift by 2, the 2 least significant bits output from the camera for each pixel value will be 0.
Standard Features AW00049313000 When the camera is set to shift by 4, the 4 least significant bits output from the camera for each pixel value will be 0. This means that the gray value scale will only include every 16th gray value, for example, 16, 32, 48, 64, and so on. If the pixel values being output by the camera’s sensor are high enough to set bit 8, bit 9, bit 10, or bit 11 to 1, we recommend not using shift by 4. If you do nonetheless, all bits output from the camera will automatically be set to 1.
Shift by 2
When the camera is set to shift by 2, the output from the camera will include bit 9 through bit 2 from each ADC. The result of shifting twice is that the output of the camera is effectively multiplied by 4.
If the pixel values being output by the camera’s sensor are high enough to set bit 10 or bit 11 to 1, we recommend not using shift by 2.
Standard Features AW00049313000 11.3.3 Precautions When Using Digital Shift There are several checks and precautions that you must follow before using the digital shift feature. The checks and precautions differ depending on whether the camera will be set for a 12 bit pixel format or for an 8 bit pixel format in your application. If you will be using a 12 bit pixel format, make this check: Use the pylon Viewer or the pylon API to set the camera for a 12 bit pixel format and no digital shift.
AW00049313000 Standard Features 11.3.4 Enabling and Setting Digital Shift You can enable or disable the digital shift feature by setting the value of the Digital Shift parameter. When the parameter is set to zero, digital shift will be disabled. When the parameter is set to 1, 2, 3, or 4, digital shift will be set to shift by 1, shift by 2, shift by 3, or shift by 4 respectively. You can set the Digital Shift parameter values from within your application software by using the Basler pylon API.
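For example (a minimal sketch; the value 2 selects shift by 2 as described above):
// Enable digital shift and set it to shift by 2
Camera.DigitalShift.SetValue( 2 );
// Disable digital shift
Camera.DigitalShift.SetValue( 0 );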
Standard Features AW00049313000 11.4 Event Reporting Event reporting is available on the camera. With event reporting, the camera can generate an "event" and after some intermediate steps transmit a related event message to the PC whenever a specific situation has occurred. The camera can generate and transmit events for the following types of situations: Overtriggering of the acquisition start trigger has occurred (AcquisitionStartOvertriggerEventData).
AW00049313000 Standard Features During the time that the camera is waiting for an acknowledgement, no new event messages can be transmitted. 4. Event reporting involves some further software-related steps and settings to be made. For more information, see the "Camera Events" code sample included with the pylon software development kit. The Event Queue As mentioned in the example above, the camera has an event queue.
Standard Features AW00049313000 Setting Your System for Event Reporting Event reporting must be enabled in the camera and some additional software-related settings must be made. This is described in the "Camera Events" code sample included with the pylon software development kit. Event reporting must be specifically set up for each type of event using the parameter name of the event and of the supplementary information.
AW00049313000 Standard Features 11.5 Luminance Lookup Table The type of electronics used on the camera allow the camera’s sensor to acquire pixel values at a 12 bit depth. Normally, when a camera is set for a 12 bit pixel data format, the camera uses the actual 12 bit pixel values reported by the sensor. The luminance lookup table feature lets you create a custom 12 bit to 12 bit lookup table that maps the actual 12 bit values output from the sensor to substitute 12 bit values of your choice.
Standard Features AW00049313000 Note There is only one lookup table. When the lookup table is enabled on color cameras, the single table is used for red, green, and blue pixel values. When a color camera is set for a YUV pixel data output format and the lookup table is enabled, the lookup table is applied to the actual red, green, and blue pixel values output from the sensor before the conversion from RGB to YUV is performed.
Fig. 61: Lookup Table with Values Mapped for Higher Camera Output at Low Sensor Readings (substitute 12 bit value plotted against actual 12 bit sensor value)
Using the Luminance Lookup Table to Get 8 Bit Output
As mentioned above, when the camera is set for a 12 bit pixel data format, the lookup table can be used to perform a 12 bit to 12 bit substitution. The lookup table can also be used in 12 bit to 8 bit fashion.
You can set the LUT Selector, the LUT Index parameter and the LUT Value parameter from within your application software by using the pylon API. The following code snippet illustrates using the API to set the selector and the parameter values:
// Select the lookup table
Camera.LUTSelector.SetValue( LUTSelector_Luminance );
// Write a lookup table to the device.
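The write loop that follows the comment above might look like the sketch below. The inverting map (4095 - i) is an arbitrary example rather than a recommendation, and depending on the camera it may only be necessary to write every Nth index; check the LUT size reported by your camera.
// Write an example (inverting) lookup table to the device
for ( int i = 0; i < 4096; i++ )
{
    Camera.LUTIndex.SetValue( i );
    Camera.LUTValue.SetValue( 4095 - i );
}
// Enable the lookup table
Camera.LUTEnable.SetValue( true );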
AW00049313000 Standard Features 11.6 Gamma Correction The gamma correction feature lets you modify the brightness of the pixel values output by the camera’s sensor to account for a non-linearity in the human perception of brightness.
Standard Features AW00049313000 11.7 Shading Correction In theory, when a digital camera captures an image of a uniform object, the pixel values output from the camera should be uniform. In practice, however, variations in optics and lighting and small variations in the sensor’s performance can cause the camera output to be non-uniform even when it is capturing images of a uniform object.
AW00049313000 Standard Features Creating a Shading Set and Uploading It to the Camera To create a user shading set file and to upload it to the camera, you must take the steps listed below. We strongly recommend that you read through all of the steps and read all of the other information in this section before you attempt to do shading correction. The steps below are intended to give you the basic knowledge needed to create a shading set and to upload it to the camera.
Standard Features AW00049313000 and divide the result by 60, add the 60 pixel 1 values together and divide the result by 60, etc.) 8. For each line of the sensor’s pixels, calculate the multipliers: a. In the line with averaged pixel values, find the pixel that has the brightest value. b. For each pixel in the line with averaged pixel values, calculate the multiplier that would make the value of that pixel equal to the value of the brightest pixel. 9.
AW00049313000 Standard Features Shading Set File Format A shading set file that you create for upload to the camera must be a binary file. The file must include a header plus the shading correction data (the multipliers for each pixel).
Standard Features AW00049313000 11.8 Trigger Delay The trigger delay feature lets you specify a delay that will be applied between the receipt of a hardware acquisition start trigger or frame start trigger and it becoming effective. The trigger delay may be specified as a time interval in the range from 0 to 1000000 µs (equivalent to 1 s) or as a number of consecutive line start triggers where the maximum number depends on the camera model.
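As an illustration only, setting a 1000 µs delay for the frame start trigger might look like the sketch below; the node names (TriggerSelector, TriggerDelayAbs) are assumptions, and the way the delay source (time interval vs. line start triggers) is selected may differ on your camera model.
// Hypothetical node names - verify them against your camera's node map
Camera.TriggerSelector.SetValue( TriggerSelector_FrameStart );
Camera.TriggerDelayAbs.SetValue( 1000 );  // delay in microseconds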
AW00049313000 Standard Features 11.9 Test Images All cameras include the ability to generate test images. Test images are used to check the camera’s basic functionality and its ability to transmit an image to the host PC. Test images can be used for service purposes and for failure diagnostics. When the camera is in test image mode, the optics, imaging sensor, and the ADCs are not used.
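For example (a sketch using the pylon API; substitute the test image you want to check, and note that the exact spelling of the enumeration values may differ):
// Select Test Image 1 and enable test image mode
Camera.TestImageSelector.SetValue( TestImageSelector_Testimage1 );
// Return to normal operation
Camera.TestImageSelector.SetValue( TestImageSelector_Off );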
Standard Features AW00049313000 11.9.1 Test Images Available on All Cameras The test images described in this section are available on both monochrome and color cameras. Test Image 1 - Fixed Diagonal Gray Gradient (8 bit) The 8 bit fixed diagonal gray gradient test image is best suited for use when the camera is set for monochrome 8 bit output. The test image consists of fixed diagonal gray gradients ranging from 0 to 255.
AW00049313000 Standard Features Test Image 3 - Moving Diagonal Gray Gradient (12 bit) The 12 bit moving diagonal gray gradient test image is similar to test image 2, but it is a 12 bit pattern. The image moves by one pixel from right to left whenever a new frame acquisition is initiated. The test pattern uses a counter that increments by one for each new frame acquisition.
Standard Features AW00049313000 11.9.2 Test Images on Color Cameras The test images described in this section are available on color cameras only. Test Image 6 - Fixed Diagonal Color Gradient Feature Test (8 bit) As shown in Figure 65, the test image consists of alternating blue, green, and red diagonal color gradients.
AW00049313000 Standard Features 11.10 Device Information Parameters Each camera includes a set of "device information" parameters. These parameters provide some basic information about the camera. The device information parameters include: Device Vendor Name (read only) - contains the name of the camera’s vendor. This string will always indicate Basler as the vendor. Device Model Name (read only) - contains the model name of the camera, for example, ruL2048-30gm.
// Read the Device ID parameter
Pylon::String_t deviceID = Camera.DeviceID.GetValue();
// Write and read the Device User ID
Camera.DeviceUserID = "custom name";
Pylon::String_t deviceUserID = Camera.DeviceUserID.GetValue();
// Read the Sensor Width parameter
int64_t sensorWidth = Camera.SensorWidth.GetValue();
// Read the Sensor Height parameter
int64_t sensorHeight = Camera.SensorHeight.GetValue();
// Read the Max Width parameter
int64_t maxWidth = Camera.WidthMax.GetValue();
AW00049313000 Standard Features 11.11 Configuration Sets A configuration set is a group of values that contains all of the parameter settings needed to control the camera. There are three basic types of configuration sets: the active configuration set, the default configuration set, and user configuration sets. Active Configuration Set The active configuration set contains the camera’s current parameter settings and thus determines the camera’s performance, that is, what your image currently looks like.
Standard Features AW00049313000 Default Startup Set You can select the default configuration set or one of the user configuration sets stored in the camera’s non-volatile memory to be the "default startup set." The configuration set that you designate as the default startup set will be loaded into the active set whenever the camera starts up at power on or after a reset. Instructions for selecting the default startup set appear on the next page. 11.11.
AW00049313000 Standard Features 11.11.2 Loading a Saved Set or the Default Set into the Active Set If you have saved a configuration set into the camera’s non-volatile memory, you can load the saved set from the camera’s non-volatile memory into the camera’s active set. When you do this, the loaded set overwrites the parameters in the active set. Since the settings in the active set control the current operation of the camera, the settings from the loaded set will now be controlling the camera.
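The sketch below shows saving the active set, loading it back, and selecting it as the default startup set with the pylon API. The node names follow the standard GenICam user set features (UserSetSelector, UserSetSave, UserSetLoad, UserSetDefaultSelector) and are assumptions insofar as this manual does not list them explicitly:
// Save the current active set into user set 1
Camera.UserSetSelector.SetValue( UserSetSelector_UserSet1 );
Camera.UserSetSave.Execute();
// Load user set 1 back into the active set
Camera.UserSetLoad.Execute();
// Make user set 1 the default startup set
Camera.UserSetDefaultSelector.SetValue( UserSetDefaultSelector_UserSet1 );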
AW00049313000 Chunk Features 12 Chunk Features This section provides detailed information about the chunk features available on each camera. 12.1 What are Chunk Features? In most cases, enabling a camera feature will simply change the behavior of the camera. The Test Image feature is a good example of this type of camera feature. When the Test Image feature is enabled, the camera outputs a test image rather than an acquired image. This type of feature is referred to as a "standard" feature.
Chunk Features AW00049313000 12.2 Making the "Chunk Mode" Active and Enabling the Extended Data Stamp Before you can use any of the camera’s "chunk" features, the "chunk mode" must be made active. Making the chunk mode active does two things: It automatically enables the Extended Frame Data chunk feature. It makes the camera’s other chunk features available to be enabled. To make the chunk mode active: Set the Chunk Mode Active parameter to true.
The following code snippet illustrates using the pylon API to run the parser and retrieve the extended image data:
// retrieve data from the extended frame data chunk
IChunkParser &ChunkParser = *Camera.CreateChunkParser();
GrabResult Result;
StreamGrabber.RetrieveResult( Result );
ChunkParser.AttachBuffer( (unsigned char*) Result.Buffer(), Result.GetPayloadSize() );
int64_t offsetX = Camera.ChunkOffsetX.GetValue();
int64_t width = Camera.ChunkWidth.GetValue();
Chunk Features AW00049313000 12.3 Frame Counter The Frame Counter feature numbers frames sequentially as they are acquired. When the feature is enabled, a chunk is added to each completed frame containing the value of the counter. The frame counter is a 32 bit value. The counter starts at 0 and wraps back to 0 after it reaches its maximum. The counter increments by 1 for each acquired frame. Whenever the camera is powered off, the counter will reset to 0.
AW00049313000 Chunk Features StreamGrabber.RetrieveResult( Result ); ChunkParser.AttachBuffer( (unsigned char*) Result.Buffer(), Result.GetPayloadSize() ); int64_t frameCounter = Camera.ChunkFramecounter.GetValue(); For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters. For more information about the pylon Viewer, see Section 3.1 on page 17.
Chunk Features AW00049313000 For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters.
AW00049313000 Chunk Features 12.4 Time Stamp The Time Stamp feature adds a chunk to each acquired frame. The chunk contains a time stamp that was generated when the frame start trigger for the frame became valid. Note that when the camera is set for continuous acquisition mode with the frame start trigger set to off, the user is not required to apply frame start trigger signals to the camera. In this case, the camera will internally generate a signal that will be used for the stamp.
Chunk Features AW00049313000 StreamGrabber.RetrieveResult( Result ); ChunkParser.AttachBuffer( (unsigned char*) Result.Buffer(), Result.GetPayloadSize() ); int64_t timeStamp = Camera.ChunkTimestamp.GetValue(); For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters. For more information about the pylon Viewer, see Section 3.1 on page 17.
12.5 Trigger Counters
The camera has the following "trigger counters" available that can help you determine if you are triggering the camera correctly:
the Line Trigger Ignored Counter
the Frame Trigger Ignored Counter
the Line Trigger End To End Counter
the Frame Trigger Counter
the Frames Per Trigger Counter
When a counter is enabled, a chunk is added to each completed frame containing the value of the counter.
Chunk Features AW00049313000 Line Trigger End To End Counter The Line Trigger End to End Counter counts the number of line triggers received by the camera from the end of the previous frame acquisition to the end of the current frame acquisition. If you subtract the number of lines actually included in the current frame from the number of lines shown by this counter, it will tell you the number of line triggers that were received but not acted on during the frame end to frame end period.
// enable the trigger counter chunks
Camera.ChunkSelector.SetValue( ChunkSelector_LineTriggerIgnoredCounter );
Camera.ChunkEnable.SetValue( true );
Camera.ChunkSelector.SetValue( ChunkSelector_FrameTriggerIgnoredCounter );
Camera.ChunkEnable.SetValue( true );
Camera.ChunkSelector.SetValue( ChunkSelector_LineTriggerEndToEndCounter );
Camera.ChunkEnable.SetValue( true );
Camera.ChunkSelector.SetValue( ChunkSelector_FrameTriggerCounter );
Camera.ChunkEnable.SetValue( true );
Chunk Features AW00049313000 12.6 Encoder Counter The encoder counter chunk indicates the value of the Shaft Encoder Module Counter parameter at the time of the occurrence of a frame trigger. When the encoder counter chunk is enabled, a chunk is added to each frame containing the value of the Shaft Encoder Module Counter parameter. The encoder counter chunk is a 16 bit value. The minimum value is 0 and the maximum is 32767. The Shaft Encoder Module Counter is part of the shaft encoder module.
AW00049313000 Chunk Features int64_t EncoderCounter = Camera.ChunkShaftEncoderCounter.GetValue(); For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters.
Chunk Features AW00049313000 12.7 Input Line Status At Line Trigger The Input Status At Line Trigger feature samples the status of all of the camera’s input lines each time a line acquisition is triggered. It collects the input line status data for each acquired line in a chunk and adds the chunk to the frame that includes the acquired line. The input status at line trigger information is a 4 bit value.
Camera.ChunkModeActive.SetValue(true);
Camera.ChunkSelector.SetValue(ChunkSelector_InputStatusAtLineTrigger);
Camera.ChunkEnable.SetValue(true);
// grab image and feed it to the chunk parser
...
int MaxIdx = int(Camera.ChunkInputStatusAtLineTriggerIndex.GetMax());
for (int i = 0; i <= MaxIdx; i++)
{
    Camera.ChunkInputStatusAtLineTriggerIndex.SetValue(i);
    int value = int(Camera.ChunkInputStatusAtLineTriggerValue.GetValue());
    // evaluate the input line status value for acquired line i here
}
Chunk Features AW00049313000 12.8 CRC Checksum The CRC (Cyclic Redundancy Check) Checksum feature adds a chunk to each acquired frame containing a CRC checksum calculated using the X-modem method. As shown in Figure 69 on page 224, the checksum is calculated using all of the image data in the frame and all of the appended chunks except for the checksum itself. The CRC chunk is always the last chunk appended to the frame.
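If you want to verify the checksum yourself rather than relying on the pylon chunk parser's HasCRC() and CheckCRC() methods (used in the snippet that follows), the X-modem CRC-16 can be computed as in this sketch (polynomial 0x1021, initial value 0); the buffer passed in must cover the image data and all appended chunks except the CRC chunk itself.
// CRC-16 as used by the X-modem protocol: polynomial 0x1021, initial value 0
unsigned short CrcXmodem( const unsigned char* data, unsigned int length )
{
    unsigned short crc = 0;
    for ( unsigned int i = 0; i < length; ++i )
    {
        crc ^= static_cast<unsigned short>( data[i] << 8 );
        for ( int bit = 0; bit < 8; ++bit )
        {
            if ( crc & 0x8000 )
                crc = static_cast<unsigned short>( ( crc << 1 ) ^ 0x1021 );
            else
                crc = static_cast<unsigned short>( crc << 1 );
        }
    }
    return crc;
}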
// Make chunk mode active and enable CRC chunk
Camera.ChunkModeActive.SetValue( true );
Camera.ChunkSelector.SetValue( ChunkSelector_PayloadCRC16 );
Camera.ChunkEnable.SetValue( true );
// Check the CRC checksum of an acquired frame
IChunkParser &ChunkParser = *Camera.CreateChunkParser();
GrabResult Result;
StreamGrabber.RetrieveResult( Result );
ChunkParser.AttachBuffer( (unsigned char*) Result.Buffer(), Result.GetPayloadSize() );
if ( ChunkParser.HasCRC() && ! ChunkParser.CheckCRC() )
{
    // the checksum does not match - the transmitted frame data may be corrupted
}
AW00049313000 Troubleshooting and Support 13 Troubleshooting and Support This chapter outlines the resources available to you if you need help working with your camera. 13.1 Tech Support Resources If you need advice about your camera or if you need assistance troubleshooting a problem with your camera, you can contact the Basler technical support team for your area. Basler technical support contact information is located in the front pages of this manual.
Troubleshooting and Support AW00049313000 13.3 Before Contacting Basler Technical Support To help you as quickly and efficiently as possible when you have a problem with a Basler camera, it is important that you collect several pieces of information before you contact Basler technical support. Copy the form that appears on the next two pages, fill it out, and fax the pages to your local dealer or to your nearest Basler support center.
7. How often did/does the problem occur?  Once. / Every time. / Regularly when: / Occasionally when:
8. How severe is the problem?  Camera can still be used. / Camera can be used after I take this action: / Camera can no longer be used.
9. Did your application ever run without problems?  Yes / No
10. Parameter set: It is very important for Basler Technical Support to get a copy of the exact camera parameters that you were using when the problem occurred.
Revision History
Doc. ID Number  Date  Changes
AW00049301000  16 Nov 2007  Initial release. Applies to color prototype cameras only.
AW00049302000  26 Feb 2008  Preliminary version. Applies to both mono and color cameras.
AW00049303000  26 Mar 2008  Applies to series production cameras.
AW00049304000  17 Apr 2008  Corrected the default for the Exposure Time Base Abs parameter stated in Section 8.2.5.2 on page 94.
Revision History AW00049313000 Doc. ID Number Date Changes AW00049310000 8 June 2010 Expanded and modified the sections added in the previous release. Added a note regarding focal flange distances to Figure 3 in Section 1.4.1 on page 7. Added a note about maximum Height parameter values in Section 8.1.1 on page 71 and Section 8.2.1 on page 77. Added the frame timeout and the frame timeout event in Section 8.2.3.4 on page 86 and Section 11.4 on page 186.
AW00049313000 Revision History Doc. ID Number Date Changes AW00049313000 7 Feb 2013 Indicated Basler AG as bearer of the copyright on the back of the front page. Updated the support email addresses in the "Contacting Basler Support Worldwide" section page 3. Updated Figure 4 on page 8. Removed the statement about the availability of a power supply and cable assembly from Basler in Section 7.4.1 on page 48. Corrected the twinning scheme in Figure 19 in Section 7.4.
AW00049313000 Index A acquisition frame count parameter ..........80 acquisition start overtrigger event .........186 acquisition start trigger ......................77, 78 acquisition status indicator ....................125 acquisition status parameter .................125 acquisition trigger wait signal ................126 API ...........................................................18 B bandwidth assigned parameter ...............32 bandwidth, managing ..............................33 bit depth ....
Index exposure start delay ................................91 exposure time maximum ..........................................93 minimum ...........................................93 setting ...............................................94 exposure time abs parameter .................95 exposure time control modes off ......................................................90 timed .................................................90 trigger width ......................................
AW00049313000 line start trigger ..................................77, 87 line status parameter ...............................68 line trigger end to end counter ...............217 line trigger ignored counter ....................217 line trigger wait signal ............................126 LUT (luminance lookup table) ...............189 LUT enable parameter ..........................191 LUT index parameter .............................191 LVTTL ......................................................
Index AW00049313000 S U saving parameter sets ...................205, 206 sensor architecture .................................40, 42 pixel size .....................................2, 3, 4 position accuracy ................................8 size .............................................2, 3, 4 type .............................................2, 3, 4 sensor height parameter .......................203 sensor width parameter ........................ 203 serial number .................................