Basler pioneer
USER’S MANUAL FOR GigE VISION CAMERAS
Document Number: AW000151
Version: 06
Language: 000 (English)
Release Date: 20 September 2007
For customers in the U.S.A. This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference when the equipment is operated in a commercial environment. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the instruction manual, may cause harmful interference to radio communications.
Contacting Basler Support Worldwide
Europe: Basler AG, An der Strusbek 60 - 62, 22926 Ahrensburg, Germany. Tel.: +49-4102-463-500, Fax: +49-4102-463-599, vc.support.europe@baslerweb.com
Americas: Basler, Inc., 855 Springdale Drive, Suite 160, Exton, PA 19341, U.S.A. Tel.: +1-877-934-8472, Fax: +1-877-934-7608, vc.support.usa@baslerweb.com
Asia: Basler Asia Pte. Ltd, 8 Boon Lay Way, # 03 - 03 Tradehub 21, Singapore 609964. Tel.: +65-6425-0472, Fax: +65-6425-0473, vc.support.asia@baslerweb.com
www.basler-vc.
Table of Contents
1 Specifications, Requirements, and Precautions . . . 1
  1.1 Models . . . 1
  1.2 General Specifications . . . 2
  1.3 Spectral Response for Mono Cameras
4 Network Recommendations . . . 53
  4.1 Recommended Network Adapters . . . 53
  4.2 Recommended Network Architectures . . . 54
    4.2.1 Peer-to-peer Network Architecture . . . 54
    4.2.2 Connecting Via Network Switches
9.7 Input and Output Lines
  9.7.1 Input Lines
  9.7.2 Output Lines
  9.7.3 Output Line Response Time
11.3 Pixel Data Output Formats for Color Cameras
  11.3.1 The Bayer Color Filter
    11.3.1.1 Color Filter Alignment
  11.3.2 Bayer GB 8 Format (Equivalent to DCAM Raw 8)
  11.3.3 Bayer GB 16 Format (Equivalent to DCAM Raw 16)
14 Chunk Features . . . 205
  14.1 What are Chunk Features? . . . 205
  14.2 Making the "Chunk Mode" Active and Enabling the Extended Data Stamp . . . 206
  14.3 Frame Counter . . . 208
  14.4 Time Stamp
1 Specifications, Requirements, and Precautions
This section lists the camera models covered by the manual. It provides the general specifications for those models and the basic requirements for using them. This section also includes specific precautions that you should keep in mind when using the cameras. We strongly recommend that you read and follow the precautions.
1.2 General Specifications
piA640-210gm/gc - Sensor size (H x V pixels): gm: 648 x 488; gc: 646 x 486. Sensor type: Kodak KAI-0340 progressive scan CCD. Optical size: 1/3". Pixel size: 7.4 µm x 7.4 µm.
piA1000-48gm/gc - Sensor size (H x V pixels): gm: 1004 x 1004; gc: 1000 x 1000. Sensor type: Kodak KAI-1020 progressive scan CCD. Optical size: 2/3". Pixel size: 7.4 µm x 7.4 µm.
piA1600-35gm/gc - Sensor size (H x V pixels): gm: 1608 x 1208; gc: 1604 x 1204. Sensor type: Kodak KAI-2020 progressive scan CCD. Optical size: 1". Pixel size: 7.4 µm x 7.4 µm.
piA1900-32gm/gc - Sensor size (H x V pixels): gm: 1928 x 1084; gc: 1926 x 1082. Sensor type: Kodak KAI-2093 progressive scan CCD. Optical size: 1". Pixel size: 7.4 µm x 7.4 µm.
piA2400-12gm/gc - Sensor size (H x V pixels): gm: 2456 x 2058; gc: 2454 x 2056. Sensor type: Sony ICX625ALA/AQA progressive scan CCD. Optical size: 2/3". Pixel size: 3.45 µm x 3.45 µm.
1.3 Spectral Response for Mono Cameras
The following graphs show the spectral response for each available monochrome camera model.
Note: The spectral response curves exclude lens characteristics and light source characteristics.
[Figure: spectral response graph, absolute quantum efficiency vs. wave length (nm)]
Fig. 2: piA1000-48gm Spectral Response (absolute quantum efficiency vs. wave length in nm)
[Figure: spectral response graph, absolute quantum efficiency vs. wave length (nm)]
Fig. 4: piA1900-32gm Spectral Response (absolute quantum efficiency vs. wave length in nm)
[Figure: spectral response graph, relative response vs. wave length, 400 nm to 1000 nm]
Specifications, Requirements, and Precautions 1.4 Spectral Response for Color Cameras The following graphs show the spectral response for each available color camera model. Note The spectral response curves exclude lens characteristics, light source characteristics, and IR cut filter characteristics. To obtain best performance from color models of the camera, use of a dielectric IR cut filter is recommended. The filter should transmit in a range from 400 nm to 700 ...
Fig. 7: piA1000-48gc Spectral Response (absolute quantum efficiency of the blue, green, and red pixels vs. wave length in nm)
[Figure: spectral response graph, absolute quantum efficiency of the blue, green, and red pixels vs. wave length (nm)]
Fig. 9: piA1900-32gc Spectral Response (absolute quantum efficiency of the blue, green, and red pixels vs. wave length in nm)
[Figure: spectral response graph, relative response of the blue, green, and red pixels vs. wave length, 400 nm to 700 nm]
Specifications, Requirements, and Precautions 1.5 Mechanical Specifications The camera housing conforms to protection class IP30 provided the lens mount is covered by a lens or by the cap that is shipped with the camera. 1.5.1 Camera Dimensions and Mounting Points The cameras are manufactured with high precision. Planar, parallel, and angular sides guarantee precise mounting with high repeatability. The camera’s dimensions in millimeters are as shown in the drawings below.
[Figure: camera dimension drawings (dimensions in mm), showing the bottom and top sides with 2 x M3 mounting threads (3.5 mm, 4 mm, and 4.5 mm deep) and the position of the photosensitive surface of the sensor]
1.5.2 Sensor Positioning Accuracy
The sensor positioning accuracy is as shown in the drawings below.
[Figure: sensor positioning accuracy drawings (dimensions in mm), showing tolerances of ± 0.4 and ± 0.25 between the center lines of the sensor and the center lines of the thread (= reference plane), the ± 0.02 sensor tilt tolerance (applies to every point on the photosensitive surface and is relative to the center of the die), and the 17.526 dimension to the photosensitive surface of the sensor]
1.5.3 Maximum Thread Length on Color Cameras
The C-mount lens adapter on color models of the camera is normally equipped with an internal IR cut filter. As shown below, the length of the threads on any lens you use with a color camera must be less than 8.0 mm. If a lens with a longer thread length is used, the IR cut filter will be damaged or destroyed and the camera will no longer operate.
[Figure (not to scale): C-mount lens, lens adapter, and IR cut filter, with the lens thread length limited to less than 8.0 mm]
Specifications, Requirements, and Precautions 1.6 Environmental Requirements 1.6.1 Temperature and Humidity Housing temperature during operation: 0 °C ... +50 °C (+32 °F ... +122 °F) Humidity during operation: 20 % ... 80 %, relative, non-condensing Storage temperature: -20 °C ... +80 °C (-4 °F ... +176 °F) Storage humidity: 20 % ... 80 %, relative, non-condensing 1.6.
Specifications, Requirements, and Precautions 1.7 Precautions Avoid Dust on the Sensor CAUTION The camera is shipped with a cap on the lens mount. To avoid collecting dust on the camera’s IR cut filter (color cameras) or sensor (mono cameras), make sure that you always put the cap in place when there is no lens mounted on the camera. Lens Thread Length is Limited CAUTION Color models of the camera with a C-mount lens adapter are equipped with an IR cut filter mounted inside of the adapter.
Warranty Precautions
To ensure that your warranty remains in force:
Do not remove the camera’s serial number label. If the label is removed and the serial number can’t be read from the camera’s registers, the warranty is void.
Do not open the camera housing. Touching internal components may damage them.
Software and Hardware Installation 2 Software and Hardware Installation This section provides the information you will need to install and operate the camera. The installation procedure includes both software and hardware installation. The installation procedure in this section assumes that you want to get your camera operational and begin capturing images as quickly and as simply as possible.
Software and Hardware Installation 2.2 Software Installation 2.2.1 What You Must Know Before You Begin The software installation procedure assumes that you are installing version 1.0 of the pylon software. If you have an older version of the software, obtain version 1.0 before you begin the installation. If you have a newer version of the software, contact Basler technical support for assistance. The software installation procedure assumes that there are no GigE cameras connected to the computer.
more robust packet resend mechanism.
The Basler pylon 1394 driver - a camera driver for IEEE 1394 cameras.
The Basler pylon Direct Show driver - a driver for use in combination with applications that display images using the Windows Direct Show module.
The IP Configuration Tool - a tool for changing the IP configuration of the camera.
The Speed-O-Meter - a tool for monitoring the camera’s frame rate and its bandwidth usage.
Software and Hardware Installation 2.2.1.1 What Happens When Installing a Basler GigE Vision Network Driver During installation of the Basler pylon Software Development Kit or the Basler pylon Viewer, Basler network drivers are bound to all network adapters installed in your computer. This applies not only to all network adapters used to connect to cameras, but also to all other network adapters installed in your PC.
Software and Hardware Installation Note If you are using a compatible GigE network adapter, the installation of the Basler pylon Software Development Kit or the Basler pylon Viewer will remove the driver that is currently installed for the adapter and will install the Basler performance driver.
Software and Hardware Installation 2.2.2 Removing Older Pylon Software If you have version 0.9 or below of the Basler filter driver installed on your system, it is critical that you remove the older version of the filter driver. If you attempt to install a newer version of the filter driver with version 0.9 or below in place, the installation will fail without any type of service message. And if you install the performance driver on a system that has version 0.
Software and Hardware Installation 4. When the removal process is complete, click the Close button on the LAN Connection Properties window. 5. Close the Network Connections window. Filter driver removal is complete. 2.2.3 Installing the pylon SDK Note During installation of the Basler pylon SDK, the current network adapter-driver associations will be changed for all network drivers installed in your PC with Basler drivers replacing the current drivers.
Software and Hardware Installation 7. A Ready to Install the Program window will open. Note If applicable, close all applications needing network connection before proceeding. Otherwise, the network connections for your applications will temporarily be lost. Click the Install button. 8. If a warning message appears indicating that Windows logo testing was not passed, click Continue Anyway. This action may be required several times. 9.
2.2.4 Installing the pylon Viewer
Note: During installation of the Basler pylon Viewer, the current network adapter-driver associations will be changed for all network drivers installed in your PC, with Basler drivers replacing the current drivers. If you want to reestablish the original network adapter-driver association for a compatible GigE adapter, see the "Changing the Driver Association for a Compatible GigE Network Adapter" section.
Software and Hardware Installation 8. If a warning message appears indicating that Windows logo testing was not passed, click Continue Anyway. This action may be required several times. 9. When the installation process is complete, a Completed window will open. Click the Finish button. 10. An Installer Information window may open informing about the need to restart the computer. If you want to restart the computer now, click the Yes button. If you want to restart the computer later, click the No button.
Software and Hardware Installation 2.2.5 Adjusting the Installation This section provides information on adjustments that must be made after the installation of the Basler pylon Software Development Kit or the Basler pylon Viewer and on additional adjustments that may be required.
Software and Hardware Installation 2.2.5.1 Configuring a Non-compatible GigE or a Fast Ethernet Network Adapter Used with Your Cameras The following procedures assume that the Basler filter driver was installed on your PC during Installation of the Basler pylon SDK, during installation of the Basler pylon Viewer, or as an individual software component.
Software and Hardware Installation c. Click the Close or the OK button on the Local Area Connection Properties window (either a Close or an OK button will be present depending on what changes you made). 5. In the Network Connections window, right click on the name of the connection for your network adapter, and select Properties from the drop down menu. 6.
Software and Hardware Installation Go on to step 8. 8. Click on the Alternate Configuration tab. a. Make sure that the Automatic private IP address radio button is selected as shown below. b. Click the OK button on the Internet Protocol Properties window. c. Click the Close button on the Local Area Connection Properties window. 9.
2.2.5.2 Changing the Driver Association for a Compatible GigE Network Adapter
During installation of the Basler pylon SDK or the Basler pylon Viewer, the current network adapter-driver associations are changed for all network drivers installed in your PC, with Basler drivers replacing the current drivers. If you want to reestablish a network adapter-driver association with the original driver, follow the procedures below.
2.2.5.3 Unbinding the Basler Filter Driver from a Non-compatible GigE or a Fast Ethernet Network Adapter
During installation of the Basler pylon SDK or the Basler pylon Viewer, the current network adapter-driver associations will be changed for all network drivers installed in your PC, with Basler drivers replacing the current drivers. Your application may require that no Basler network driver is bound to an adapter that is not used for a camera.
Software and Hardware Installation 2.2.5.4 Installing a Software Component The procedure below describes how to install a Basler pylon software component that is not included in the current installation of Basler pylon software on your PC. You will have to install a Basler pylon software component in, for example, one of these situations: You initially only installed the Basler pylon Viewer and now want to use the SDK.
Software and Hardware Installation 5. A Customer Information window will open. Enter the appropriate information and click the Next button. 6. A Setup Type window will open. a. Choose Custom Installation. b. Deselect all components of the software that you do not want to install. c. Click the Next button 7. A Ready to Install the Program window will open. Note If applicable, close all applications needing network connection before proceeding.
2.2.6 If You Are Using Multiple Network Adapters in a Single PC
With a typical installation of GigE cameras, the default network adapter settings will cause each adapter to use what is known as Automatic Private IP Addressing (APIPA) to assign itself an IP address. If you are using only one network adapter in your host PC, these adapter settings will work correctly with Basler GigE cameras that are set to default (e.g., new cameras coming out of the box).
Software and Hardware Installation 2.2.7 Software Licensing Information LWIP TCP/IP Implementation The software in this camera includes the LWIP TCP/IP implementation. The copyright information for this implementation is as follows: Copyright (c) 2001, 2002 Swedish Institute of Computer Science. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1.
2.3 Hardware Installation
Voltage Outside of Specified Range Can Damage the Camera
CAUTION: If the voltage of the input power to the camera is greater than +24 VDC, damage to the camera can result. If the voltage is less than +12 VDC, the camera may operate erratically.
An Incorrect Plug Can Damage the 12-pin Connector
CAUTION: The plug on the cable that you attach to the camera’s 12-pin connector must have 12 pins.
Software and Hardware Installation 2.4 Acquiring Your First Images To adjust your camera’s settings and to acquire and view your first images, we suggest that you use the pylon Viewer software. The following steps assume that you are using the pylon Viewer. At this point, many of the camera’s settings will be preliminary. Information about how to improve your image quality and to make camera settings more suitable for your specific application appear later in this section of this manual.
4. Click on your camera’s name in the device tree to select the camera. A camera properties pane will open in the viewer as shown below. You can use the selections in the camera properties pane to adjust all of the camera’s settings, such as gain, black level, and exposure. Notice the user level selector drop-down box that now appears in the lower left corner of the properties pane. You can select the beginner, expert, or guru user level.
Software and Hardware Installation 7. Change the Black Level setting to a value between 16 and 32 and change the Gain setting to its lowest allowed value: a. In the Camera Properties Pane, click the + Sign beside your camera’s name. A list of setting categories will appear. b. Click the + Sign beside the Analog Controls category. The controls for setting Gain and Black Level will appear as shown below.
Software and Hardware Installation 8. Change the Shutter setting to its lowest allowed value: a. Click the + Sign beside the Acquisition Controls category. The controls for setting the Exposure Time will appear as shown below. b. Use the slider to set the Exposure Time to its lowest allowed value.
Software and Hardware Installation 9. Click the icon in the tool bar or click Camera in the menu bar and select Continuous Shot from the drop down menu. The camera will begin to acquire images continuously. The acquired images are displayed in an image display window as shown below. With the current camera settings, the images you are seeing may be very dark or perhaps even black. The following steps assume that the current images are too dark. 10.
Software and Hardware Installation 15. Adjust Exposure Time, Gain, and Black Level (in that order) to further improve the quality of the acquired images. 16. Stop image acquisition by clicking the icon or by clicking Camera in the menu bar and selecting Stop Grab from the drop down menu. 17.
Software and Hardware Installation 2.5 Adjusting Image Quality In the following descriptions, we will discuss image quality in terms of focus, depth of focus, brightness and contrast. You can adjust image quality with regard to these criteria by choosing appropriate settings. However, the "best" image quality will partly depend on the specific requirements of your application and therefore no generally applicable "best" setting can be recommended.
Software and Hardware Installation Brightness: Among the factors determining the brightness of an image are the intensity of the illumination, the setting of the lens aperture, and the settings for black level, exposure time, and gain. We recommend that you choose bright illumination if possible, but avoid excessive intensity. This will prevent you from needing to operate the camera using extreme camera settings.
Software and Hardware Installation Exposure Time: The exposure time setting determines the time interval during which the sensor is exposed to light. Choose an exposure time setting that takes account of whether you want to acquire images of still or moving objects: If the object is not moving, you can choose a high exposure time setting (i.e., a long exposure interval).
Software and Hardware Installation Note that it is not possible to improve the signal-to-noise ratio by increasing gain. Increasing the gain will increase both signal and noise in equal proportions. White Balance: If you are using a color camera, the object that you use when adjusting the white balance should be a uniform gray and should fill the camera’s entire field of view. Your white balance will be correct when images of this object show a uniform gray.
Software and Hardware Installation 2.6 Next Steps We assume that you have succeeded in acquiring images and controlling the camera using the pylon Viewer and that you were able to optimize the image quality. To meet the requirements of your application, you will likely need to make additional camera settings and to modify previous camera settings. See the sections in this manual describing camera operation and features for details about additional camera settings.
Tools for Changing Camera Parameters 3 Tools for Changing Camera Parameters This section explains the options available for changing the camera’s parameters. The available options let you change parameters either by using standalone tools that access the camera via a GUI or by accessing the camera from within your software application. 3.1 The pylon Viewer The Basler pylon Viewer is a standalone application that lets you view and change most of the camera’s parameter settings via a GUI based interface.
Tools for Changing Camera Parameters 3.3 The pylon API You can access all of the camera’s parameters and can control the camera’s full functionality from within your application software by using Basler’s pylon API. The Basler pylon Programmer’s Guide and API Reference contains an introduction to the API and includes information about all of the methods and objects included in the API.
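By way of illustration, the short sketch below uses the parameter access pattern that appears throughout this manual. It assumes that a pylon camera object named Camera has already been created and opened as described in the Programmer’s Guide; the exact setup code depends on your pylon version, and the parameter names shown here (GainRaw, ExposureTimeAbs) should be verified against the API Reference.
// This sketch assumes "Camera" is an already-open pylon camera object.
// Set the camera's gain to a raw value of 300.
Camera.GainRaw.SetValue( 300 );
// Read back the current exposure time setting in microseconds.
double exposureTime = Camera.ExposureTimeAbs.GetValue();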
Network Recommendations 4 Network Recommendations This section describes the recommended adapters and architectures for the network to which your cameras are attached. 4.1 Recommended Network Adapters The recommended network adapters for use with Basler GigE cameras are adapters of the Intel PRO 1000 series. The recommended adapters in the PRO 1000 series have been tested with the cameras and compatibility with these adapters is assured. These adapters are attractively priced and readily available.
Network Recommendations 4.2 Recommended Network Architectures 4.2.1 Peer-to-peer Network Architecture A strongly recommended network architecture is direct peer-to-peer connection between your cameras and your host PC. As shown in Figure 15, the cameras can be connected to individual gigabit network adapters in the host PC or to a multiport adapter in the PC.
Network Recommendations 4.2.2 Connecting Via Network Switches A second recommended network architecture involves connecting your cameras to the Gigabit Ethernet (GigE) adapter(s) in your host PC via network switches. As shown in Figure 16, the cameras can be connected to individual GigE network adapters in the host PC or to a multiport adapter in the PC. Note that the figure below only depicts simple schemes for connecting cameras to a PC via network switches.
Network Recommendations PC 1 Port GigE Adapter Single Path GigE Network Switch GigE Camera GigE Camera GigE Camera GigE Camera Fig. 17: Multiple Cameras Converging to a Single Path For more information about managing network bandwidth when using multiple cameras on a single network path, see Section 6 on page 73. Network Switch Issues When selecting GigE network switches for use in the type of network described above, there are several issues you must keep in mind.
Network Recommendations 4.3 PC Data Bus Issues If you are connecting multiple cameras to your host PC, either through direct peer-to-peer connections or through network switches, you must be aware of the data bus type used in the PC. The PCI bus typical of older PCs is theoretically capable of handling 132 MByte/s of data. But in practice, the capacity of the PCI bus is lower and the bus bandwidth is shared by the network adapter and many other devices installed in the PC.
Camera and Network Adapter IP Configuration 5 Camera and Network Adapter IP Configuration This section describes the default IP configuration for your network adapter and camera. It also describes how to change the IP configuration on your camera and on your network adapter. Note This section of the user’s manual assumes that you are familiar with basic Ethernet network concepts and with TCP/IP concepts such as IP addresses, subnet masks, and default gateways.
Camera and Network Adapter IP Configuration 5.1 Network Adapter IP Behavior The two most common ways to set the IP configuration on network adapters are: as "Fixed Address" as "DHCP / Alternate Configuration = APIPA (Automatic Private IP Addressing)" When an adapter is set for Fixed Address, it will simply use a fixed address that has been assigned to the adapter by the user.
Camera and Network Adapter IP Configuration 5.2 Changing a Network Adapter’s IP Configuration 5.2.1 Setting an Adapter to Use a Fixed IP Address You can configure a network adapter to use a fixed IP address by doing the following: 1. Open a Network Connections window. a. Find the connection for the adapter you want to configure. Right click on the name of the connection and select Properties from the drop down menu as shown below. 2. A Local Area Connection Properties window will open as shown below.
Camera and Network Adapter IP Configuration 3. An Internet Protocol (TCP/IP) Properties window will open and the General tab will be selected as shown below. 4. Click the radio button next to Use the following IP address. The window will change and will now allow you to enter IP address information. a. Enter your desired IP address and subnet mask. The figure below shows the window with typical values entered.
Note: When you configure an adapter to use a fixed address, there are some things that you must keep in mind: If your PC has multiple network adapters, each adapter must be in a different subnet. The recommended range for fixed IP addresses is from 172.16.0.1 to 172.31.255.254 and from 192.168.0.1 to 192.168.255.254. These address ranges have been reserved for private use according to IP standards.
Camera and Network Adapter IP Configuration 5.2.2 Setting an Adapter to Use DHCP / Alternate Configuration = APIPA When a network adapter is set for DHCP / Alternate Configuration = APIPA, it will first try to find a DHCP server and to obtain an IP address from the server. If no DHCP server is available, the adapter will revert to the "alternate configuration.
Camera and Network Adapter IP Configuration 3. An Internet Protocol Properties window will open as shown below. a. On the General tab, make sure that the Obtain an IP address automatically radio button is selected. (This sets the adapter to check for a DHCP server as its first choice.) 4. Click on the Alternate Configuration tab. The settings on this tab are used to set the alternate configuration that the adapter will use if no DHCP server is found. a.
Camera and Network Adapter IP Configuration 5.2.3 Checking a Network Adapter’s IP Address You can check the current IP address of a network adapter by doing the following. 1. Open a Network Connections window. 2. Find the connection for the adapter you want to check and make sure that the status of the connection is shown as "connected" (if the status is "disconnected," this procedure will not work). Double click on the name of the connection. A LAN Connection Status window will open as shown below. 3.
Camera and Network Adapter IP Configuration 5.3 Camera IP Behavior When a camera is powered on or reset, it exhibits the following behavior when it tries to connect to an Ethernet network: It checks to see if the camera has been configured to use a persistent (fixed) IP address. If the camera has a persistent IP address configured, it will use the persistent IP address.
Camera and Network Adapter IP Configuration Note For auto IP assignment to work correctly, the network adapter that the camera is plugged into must also be set for auto IP assignment or it must be set for a fixed address in the auto IP address range. Note If you have multiple network adapters in your PC, only one adapter can be set to use auto IP assignment.
Camera and Network Adapter IP Configuration 5.4 Changing a Camera’s IP Configuration An application called the IP Configuration Tool is included as part of the pylon driver installation package. The IP Configuration Tool lets you make changes to the IP configuration of your camera. To start the IP Configuration Tool: Double click the pylon IP Configuration Tool icon on your desktop Or click Start, click All Programs, click Basler Vision Technologies, click Pylon x.
Camera and Network Adapter IP Configuration The IP Configuration Tool has two modes, List Mode and Edit mode: When the tool is first opened it is in List mode. In list mode, the tool can display a list of detected cameras, can display information about the camera selected in the list, and will let you assign a temporary IP address to the selected camera. When in list mode, the tool does not have a control channel open to any of the detected cameras.
Camera and Network Adapter IP Configuration Making Changes to the Camera’s IP Configuration You can use the IP Configuration Tool to make permanent changes to the camera’s IP configuration. (Permanent means that the changes will stay in place even when the camera is reset or switched off and back on.) If you want to change the IP configuration of the camera: 1. Click on the Change IP Configuration button. a.
Camera and Network Adapter IP Configuration 4. When you are finished making changes, click the Write Configuration button. A message will appear indicating that the camera is restarting its IP configuration cycle as shown below. When this message disappears the changes to the IP configuration will be in place and the IP Configurator window will now show the changed settings.
Basler Network Drivers and Parameters 6 Basler Network Drivers and Parameters This section describes the Basler network drivers available for your camera and provides detailed information about the parameters associated with the drivers. Two network drivers are available for the network adapter used with your GigE cameras: The Basler filter driver is a basic GigE Vision network driver that is compatible with all network adapters. The advantage of this driver is its extensive compatibility.
6.1 The Basler Filter Driver
The Basler filter driver is a basic GigE Vision network driver. It is designed to be compatible with most network adapter cards. The functionality of the filter driver is relatively simple. For each frame, the driver checks the order of the incoming packets. If the driver detects that a packet or a group of packets is missing, it will wait for a specified period of time to see if the missing packet or group of packets arrives.
Basler Network Drivers and Parameters For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters. For more information about the pylon Viewer, see Section 2.2 on page 20 and Section 2.4 on page 40.
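As a hedged sketch only: the filter driver’s parameters are exposed through a GenApi node map on the stream grabber, so they can be set with generic GenApi calls as shown below. The object name StreamGrabber, the availability of a GetNodeMap() method, and the node names "EnableResend" and "PacketTimeout" are assumptions made for illustration; check the pylon Programmer’s Guide and API Reference for the names used by your pylon version.
// Access the node map of an already-prepared stream grabber object
// (object name and node names are assumptions, not taken from this manual).
GenApi::INodeMap &driverNodeMap = *StreamGrabber.GetNodeMap();
// Enable the packet resend mechanism (assumed node name).
GenApi::CBooleanPtr enableResend = driverNodeMap.GetNode( "EnableResend" );
enableResend->SetValue( true );
// Wait up to 40 ms for a missing packet before requesting a resend (assumed node name).
GenApi::CIntegerPtr packetTimeout = driverNodeMap.GetNode( "PacketTimeout" );
packetTimeout->SetValue( 40 );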
Basler Network Drivers and Parameters 6.2 The Basler Performance Driver The Basler performance driver is a hardware specific GigE Vision network driver compatible with network adapters that use specific Intel chipsets. The main advantage of the performance driver is that it significantly lowers the CPU load needed to service the network traffic between the PC and the camera(s). It also has a more robust packet resend mechanism. For more information about compatible Intel chipsets, see Section 4.
Basler Network Drivers and Parameters General Parameters Enable Resend - Enables the packet resend mechanisms. If the Enable Resend parameter is set to false, the resend mechanisms are disabled. The performance driver will not check for missing packets and will not send resend requests to the camera. If the Enable Resend parameter is set to true, the resend mechanisms are enabled. The performance driver will check for missing packets.
Basler Network Drivers and Parameters Resend Request Threshold - This parameter determines the location of the resend request threshold within the receive window as shown in Figure 18. The parameter value is in per cent of the width of the receive window. In Figure 18 the resend request threshold is set at 33.33% of the width of the receive window. A stream of packets advances packet by packet beyond the resend request threshold (i.e. to the left of the resend request threshold in Figure 18).
Basler Network Drivers and Parameters Timeout Resend Mechanism Parameters The timeout resend mechanism is illustrated in Figure 19 where the following assumptions are made: The frame includes 3000 packets. Packet 1002 is missing within the stream of packets and has not been recovered. Packets 2999 and 3000 are missing at the end of the stream of packets (end of the frame). The Maximum Number Resend Requests parameter is set to 3.
Basler Network Drivers and Parameters Maximum Number Resend Requests - The Maximum Number Resend Requests parameter sets the maximum number of resend requests the performance driver will send to the camera for each missing packet. Resend Timeout - The Resend Timeout parameter defines how long (in milliseconds) the performance driver will wait after detecting that a packet is missing before sending a resend request to the camera.
Basler Network Drivers and Parameters Threshold and Timeout Resend Mechanisms Combined Figure 20 illustrates the combined action of the threshold and the timeout resend mechanisms where the following assumptions are made: All parameters set to default. The frame includes 3000 packets. Packet 1002 is missing within the stream of packets and has not been recovered. Packets 2999 and 3000 are missing at the end of the stream of packets (end of the frame).
Basler Network Drivers and Parameters (9) Because the maximum number of resend requests has been sent and the last Resend Response Timeout interval has expired, packet 1002 is now considered as lost. (10) End of the frame. (11) Missing packets at the end of the frame (2999 and 3000). (12) Interval defined by the Packet Timeout parameter. You can set the performance driver parameter values from within your application software by using the pylon API.
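The hedged sketch below shows one way the resend parameters described above might be set through the stream grabber’s node map. The node names are assumptions derived from the parameter names used in this section; verify them against the pylon API Reference before use.
// Assumes "StreamGrabber" is an already-prepared stream grabber object.
GenApi::INodeMap &driverNodeMap = *StreamGrabber.GetNodeMap();
// Send at most 3 resend requests per missing packet (assumed node name).
GenApi::CIntegerPtr maxResendRequests = driverNodeMap.GetNode( "MaximumNumberResendRequests" );
maxResendRequests->SetValue( 3 );
// Wait 2 ms after detecting a missing packet before requesting a resend (assumed node name).
GenApi::CIntegerPtr resendTimeout = driverNodeMap.GetNode( "ResendTimeout" );
resendTimeout->SetValue( 2 );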
Basler Network Drivers and Parameters For more information about the pylon Viewer, see Section 2.2 on page 20 and Section 2.4 on page 40. Adapter Properties When the Basler Performance driver is installed, it adds a set of "advanced" properties to the network adapter. These properties include: Max Packet Latency - A value in microseconds that defines how long the adapter will wait after it receives a packet before it generates a packet received interrupt.
Basler Network Drivers and Parameters 6.3 Transport Layer Parameters The transport layer parameters are part of the camera’s basic GigE implementation. These parameters do not normally require adjustment. Read Timeout - If a register read request is sent to the camera via the transport layer, this parameter designates the time out (in milliseconds) within which a response must be received.
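If you do need to adjust it, the hedged sketch below shows how the read timeout might be raised through the camera’s transport layer node map. It assumes that the camera object exposes this node map via a GetTLNodeMap() method and that the node is named "ReadTimeout"; both are assumptions to verify against the pylon API Reference.
// Access the transport layer node map of an already-open camera object
// (method name and node name are assumptions).
GenApi::INodeMap &tlNodeMap = *Camera.GetTLNodeMap();
// Allow 1000 ms for a response to a register read request (assumed node name).
GenApi::CIntegerPtr readTimeout = tlNodeMap.GetNode( "ReadTimeout" );
readTimeout->SetValue( 1000 );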
Network Related Camera Parameters and Managing Bandwidth 7 Network Related Camera Parameters and Managing Bandwidth This section describes the camera parameters that are related to the camera’s performance on the network. It also describes how to use the parameters to manage the available network bandwidth when you are using multiple cameras. 7.
Network Related Camera Parameters and Managing Bandwidth The packet size parameter should always be set to the maximum size that your network adapter and network switches (if used) can handle. Inter-packet Delay (read/write) Sets the delay in ticks between the packets sent by the camera. Applies to the selected stream channel. Increasing the inter-packet delay will decrease the camera’s effective data transmission rate and will thus decrease the network bandwidth used by the camera.
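For example, the hedged snippet below sets both parameters on the camera. The parameter names GevSCPSPacketSize and GevSCPD follow the same naming pattern as the other stream channel parameters shown later in this section, but they should be confirmed in the pylon API Reference for your camera.
// Set the packet size to 1500 bytes; use the largest size that your
// network adapter and network switches can handle.
Camera.GevSCPSPacketSize.SetValue( 1500 );
// Add an inter-packet delay of 1000 ticks to lower the camera's
// effective data transmission rate.
Camera.GevSCPD.SetValue( 1000 );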
Network Related Camera Parameters and Managing Bandwidth Bandwidth Reserve (read/write) Used to reserve a portion of the assigned bandwidth for packet resends and for the transmission of control data between the camera and the host PC. The setting is expressed as a percentage of the Bandwidth Assigned parameter. For example, if the Bandwidth Assigned parameter indicates that 30 MByte/s have been assigned to the camera and the Bandwidth Reserve parameter is set to 5%, then the bandwidth reserve will be 1.
[Table: bandwidth reserve accumulation example over nine time periods, in each of which a frame is acquired and transmitted (FA&T). For each time period, the table lists the resends available via the bandwidth reserve (5 per period), the resends needed, the effect on the accumulator pool, and the resends left in the accumulator pool after frame transmission. In one period, more resends are needed than are available via the bandwidth reserve and the accumulator pool combined, so not enough resends are available.]
Network Related Camera Parameters and Managing Bandwidth (6) You trigger image acquisition and during this time period, the camera acquires and transmits a frame. The bandwidth reserve setting would allow 5 resends during this time period and 1 resend is needed. The 1 resend needed is taken from the resends available via the bandwidth reserve. The other 4 resends available via the bandwidth reserve are not needed, so they are added to the accumulator pool and they bring the pool up to 4.
Network Related Camera Parameters and Managing Bandwidth Device Current Throughput (read only) Indicates the actual bandwidth (in bytes per second) that the camera will use to transmit image data and chunk data given the current area of interest settings, chunk feature settings, and the pixel format setting. If the Acquisition Frame Rate abs parameter has been used to set the camera’s frame rate, the camera will use this frame rate setting to calculate the device current throughput.
// Bandwidth Reserve Accumulation
Camera.GevSCBWRA.SetValue( 10 );
// Frame Jitter Max
int64_t jitterMax = Camera.GevSCFJM.GetValue();
// Device Max Throughput
int64_t maxThroughput = Camera.GevSCDMT.GetValue();
// Device Current Throughput
int64_t currentThroughput = Camera.GevSCDCT.GetValue();
// Resulting Frame Rate
double resultingFps = Camera.ResultingFrameRateAbs.GetValue();
Network Related Camera Parameters and Managing Bandwidth 7.2 Managing Bandwidth When Multiple Cameras Share a Single Network Path If you are using a single camera on a GigE network, the problem of managing bandwidth is simple. The network can easily handle the bandwidth needs of a single camera and no intervention is required. A more complicated situation arises if you have multiple cameras connected to a single network adapter as shown in Figure 21.
Network Related Camera Parameters and Managing Bandwidth transmission rate will be. After you have adjusted the Inter-packet Delay parameter on each camera, you can check the sum of the Bandwidth Assigned parameter values and see if the sum is now less than 125 MByte/s. 7.2.1 A Procedure for Managing Bandwidth In theory, managing bandwidth sharing among several cameras is as easy as adjusting the interpacket delay.
Network Related Camera Parameters and Managing Bandwidth Tip The manufacturer’s documentation sometimes makes it difficult to determine the maximum packet size for a device, especially network switches. There is a "quick and dirty" way to check the maximum packet size for your network with its current configuration: 1. Open the pylon Viewer, select a camera, and set the Packet Size parameter to a low value (1 kB for example). 2. Use the Continuous Shot mode to capture several images. 3.
Where:
Packet Overhead = 72 (for a GigE network) or 78 (for a 100 MBit/s network)
Leader Size = Packet Overhead + 36 (if chunk mode is not active) or Packet Overhead + 12 (if chunk mode is active)
Trailer Size = Packet Overhead + 8
⎡x⎤1 means round up x to the nearest integer
⎡x⎤4 means round up x to the nearest multiple of 4
Step 4 - Calculate the "data bandwidth assigned" to each camera. For each camera, there is a parameter called Bandwidth Assigned.
Network Related Camera Parameters and Managing Bandwidth Note If you increase the inter-packet delay to lower a camera’s data output rate there is something that you must keep in mind. When you lower the data output rate, you increase the amount of time that the camera needs to transmit an acquired frame (image). Increasing the frame transmission time can restrict the camera’s maximum allowed acquisition frame rate. Step 6 - Check that the total bandwidth assigned is less than the network capacity. 1.
Camera Functional Description 8 Camera Functional Description This section provides an overview of the camera’s functionality from a system perspective. The overview will aid your understanding when you read the more detailed information included in the next sections of the user’s manual. 8.1 Overview Each camera provides features such as a full frame shutter and electronic exposure time control.
The image buffer between the sensor and the Ethernet controller allows data to be read out of the sensor at a rate that is independent of the data transmission rate between the camera and the host computer. This ensures that the data transmission rate has no influence on image quality.
[Figure: sensor architecture diagram showing the columns of pixels and the vertical shift registers on either side of the sensor center line]
[Figure: camera block diagram. Image data passes from the sensor through the VGC and ADC to the FPGA with its 24 MB image buffer and on to the Ethernet controller, which exchanges image data and control data with the Ethernet network. A microcontroller handles control data (AOI, gain, black level), and the I/O block carries the ExTrig input and the ExpActive and TrigRdy output signals.]
Physical Interface 9 Physical Interface This section provides detailed information, such as pinouts and voltage requirements, for the physical interface on the camera. This information will be especially useful during your initial designin process. 9.1 General Description of the Connections The camera is interfaced to external circuity via connectors located on the back of the housing: An 8-pin, RJ-45 jack used to provide a 100/1000 Mbit/s Ethernet connection to the camera.
Physical Interface 9.2 Connector Pin Assignments and Numbering 9.2.1 12-Pin Receptacle Pin Assignments The 12 pin receptacle is used to access the two physical input lines and four physical output lines on the camera. It is also used to supply power to the camera. The pin assignments for the receptacle are shown in Table 4.
9.2.2 RJ-45 Jack Pin Assignments
The 8-pin RJ-45 jack provides Ethernet access to the camera. Pin assignments adhere to the Ethernet standard.
9.2.3 Pin Numbering
[Figure: pin numbering of the 12-pin receptacle, pins 1 through 12]
Physical Interface 9.3 Connector Types 9.3.1 8-pin RJ-45 Jack The 8-pin jack for the camera’s Ethernet connection is a standard RJ-45 connector. The recommended mating connector is any standard 8-pin RJ-45 plug. Green and Yellow LEDs This RJ-45 jack on the camera includes a green LED and a yellow LED. When the green LED is lit, it indicates that an active network connection is available. When the yellow LED is lit, it indicates that data is being transmitted via the network connection. 9.3.
Physical Interface 9.4 Cabling Requirements 9.4.1 Ethernet Cables Use high-quality Ethernet cables. To avoid EMI, the cables must be shielded. Use of category 6 or category 7 cables with S/STP shielding is strongly recommended. As a general rule, applications with longer cables or applications in harsh EMI conditions require higher category cables. Either a straight-through (patch) or a cross-over Ethernet cable can be used to connect the camera directly to the PC or to a network switch. 9.4.
Physical Interface Fig. 26: Power and I/O Cable Note To avoid a voltage drop with long input power wires, we recommend that you supply input power VCC through two separate wires between the power supply and the camera as shown in the figure above. We also recommend that you supply input power ground through two separate wires between the power supply and the camera as shown in the figure.
9.5 Input Power
Input power must be supplied to the camera’s 12-pin connector via the power and I/O cable. Nominal input voltage is +12 VDC; however, the camera will operate properly on any input voltage between +12.0 VDC and +24.0 VDC with less than one percent ripple. Power consumption is as shown in the specification tables in Section 1 of this manual.
Physical Interface 9.7 Input and Output Lines 9.7.1 Input Lines The camera is equipped with two physical input lines designated as Input Line 1 and Input Line 2. The input lines are accessed via the 12-pin receptacle on the back of the camera. As shown in the I/O line schematic, each input line is opto-isolated. For each line, the minimum input voltage to indicate a logical one is +5 VDC and the maximum is +24 VDC. An input voltage less than +5 VDC means a logical zero.
Physical Interface 9.7.2 Output Lines The camera is equipped with four physical output lines designated as Output Line 1, Output Line 2, Output Line 3, and Output Line 4. The output lines are accessed via the 12-pin receptacle on the back of the camera. As shown in the I/O schematic, each output line is opto-isolated. The minimum VCC that must be applied is +5 VDC and the maximum is +24 VDC. The maximum current allowed through an output circuit is 100 mA.
Physical Interface Figure 29 shows a typical circuit you can use to monitor an output line with an LED or an optocoupler. In this example, the voltage for the external circuit is +24 VDC. Current in the circuit is limited by an external resistor. The circuit in Figure 29 is monitoring output line 1. Fig. 29: Typical LED Output Signal By default, the camera’s exposure active (ExpAc) signal is assigned to Output Line 1. The exposure active signal indicates when exposure is taking place.
9.7.3 Output Line Response Time
Response times for the output lines on the camera are as shown below.
Fig. 30: Output Line Response Times [timing diagram of the camera output signal and the resulting output line voltage, showing TDR, RT, TDF, and FT at the 90% levels]
Time Delay Rise (TDR) = 1.5 µs
Rise Time (RT) = 1.3 - 5.0 µs
Time Delay Fall (TDF) = 1 - 20 µs
Fall Time (FT) = 1 - 5 µs
Note: The response times for the output lines on your camera will fall into the ranges specified above.
Image Acquisition Control 10 Image Acquisition Control This section provides detailed information about controlling image acquisition. You will find details about setting the exposure time for each acquired image and about how the camera’s maximum allowed acquisition frame rate can vary depending on the current camera settings. 10.
Image Acquisition Control 10.1.2 Acquiring One Image at a Time In “single frame” operation, the camera acquires and transmits a single image. To select single frame operation, the camera’s Acquisition Mode parameter must be set to Single Frame. To begin image acquisition, execute an Acquisition Start command. Exposure time is determined by the value of the camera’s exposure time parameter.
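A minimal sketch of this sequence, in the same fragment style as the other snippets in this manual and assuming an already-open camera object named Camera (the code that retrieves the grabbed image is omitted):
// Select single frame acquisition mode.
Camera.AcquisitionMode.SetValue( AcquisitionMode_SingleFrame );
// Set the exposure time to 3000 µs.
Camera.ExposureTimeAbs.SetValue( 3000.0 );
// Begin the acquisition of one image.
Camera.AcquisitionStart.Execute();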
// set the camera to continuous acquisition mode
Camera.AcquisitionMode.SetValue( AcquisitionMode_Continuous );
// set a frame rate and get the resulting frame rate
Camera.AcquisitionFrameRateEnable.SetValue( true );
Camera.AcquisitionFrameRateAbs.SetValue( 20.5 );
double resultingFrameRate = Camera.ResultingFrameRateAbs.GetValue();
You can also execute the Acquisition Start and Stop commands by using the API.
Image Acquisition Control 10.2 Controlling Image Acquisition with a Software Trigger You can configure the camera so that image acquisition will be controlled by issuing a software trigger. The software trigger is issued by executing a Trigger Software command. Image acquisition starts when the Trigger Software command is executed. The exposure time for each image is determined by the value of the camera’s exposure time parameter. Figure 32 illustrates image acquisition with a software trigger.
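The hedged sketch below illustrates one way to configure software triggering and issue a trigger. The enumeration names follow the standard pylon naming pattern used in this manual’s other snippets, but they are assumptions that should be verified against the pylon API Reference.
// Select the acquisition start trigger and set it to be generated by software.
Camera.TriggerSelector.SetValue( TriggerSelector_AcquisitionStart );
Camera.TriggerMode.SetValue( TriggerMode_On );
Camera.TriggerSource.SetValue( TriggerSource_Software );
// Set the exposure time to 3000 µs.
Camera.ExposureTimeAbs.SetValue( 3000.0 );
// Start acquisition and issue one software trigger.
Camera.AcquisitionStart.Execute();
Camera.TriggerSoftware.Execute();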
Image Acquisition Control You can also use the Basler pylon Viewer application to easily set the parameters. For more information about the pylon Viewer, see Section 2.2 on page 20 and Section 2.4 on page 40. 10.2.2 Acquiring a Single Image by Applying One Software Trigger You can set the camera to react to a single software trigger and then issue a software trigger to begin image acquisition. To do so, follow this sequence: 1.
Image Acquisition Control 10.2.3 Acquiring Images by Applying a Series of Software Triggers You can set the camera to react to multiple applications of the software trigger and then apply a series of software triggers to acquire images. To do so, follow this sequence: 1. Access the camera’s API and set the exposure time parameter for your desired exposure time. 2. Set the value of the camera’s Acquisition Mode parameter to Continuous. 3. Execute an Acquisition Start command.
double resultingFrameRate = Camera.ResultingFrameRateAbs.GetValue();
// how to disable the FrameRateAbs parameter
Camera.AcquisitionFrameRateEnable.SetValue( false );
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters. For more information about the pylon Viewer, see Section 2.2 on page 20 and Section 2.4 on page 40.
Image Acquisition Control 10.3 Controlling Image Acquisition with a Hardware Trigger You can configure the camera so that an external hardware trigger (ExTrig) signal applied to one of the input lines will control image acquisition. A rising edge or a falling edge of the ExTrig signal can be used to trigger image acquisition. The ExTrig signal can be periodic or non-periodic.
Image Acquisition Control starts when the ExTrig signal falls. Figure 33 illustrates timed exposure with the camera set for rising edge triggering. ExTrig Signal Period ExTrig Signal Exposure (duration determined by the exposure time parameter) Fig. 33: Timed Exposure with Rising Edge Triggering Trigger Width Exposure Mode When trigger width exposure mode is selected, the length of the exposure will be directly controlled by the ExTrig signal.
Image Acquisition Control expect exposure to take place only when the ExTrig signal is high. But since the signal falls while the previous frame is still reading out, the camera automatically extends exposure until the readout is complete. On the second cycle of the ExTrig signal shown in the figure, the signal rises during previous frame readout, but falls after the readout is complete. This is a normal situation and exposure would be determined by the high time of the ExTrig signal as you would expect.
10.3.2 Setting the Camera for Hardware Triggering
To set the camera for hardware triggering:
Use the Trigger Selector parameter to select the Acquisition Start trigger.
Use the Trigger Mode parameter to set the trigger mode to On.
Use the Trigger Source parameter to set the camera to accept the hardware trigger signal on input line 1 or on input line 2.
Use the Trigger Activation parameter to set the camera for rising edge triggering or for falling edge triggering.
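A hedged sketch of these settings, again assuming an already-open camera object named Camera and the standard pylon enumeration names (verify the exact names against the API Reference):
// Select the acquisition start trigger and enable triggered operation.
Camera.TriggerSelector.SetValue( TriggerSelector_AcquisitionStart );
Camera.TriggerMode.SetValue( TriggerMode_On );
// Accept the external trigger (ExTrig) signal on input line 1, rising edge.
Camera.TriggerSource.SetValue( TriggerSource_Line1 );
Camera.TriggerActivation.SetValue( TriggerActivation_RisingEdge );
// Use the timed exposure mode with a 3000 µs exposure time.
Camera.ExposureMode.SetValue( ExposureMode_Timed );
Camera.ExposureTimeAbs.SetValue( 3000.0 );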
Image Acquisition Control 10.3.3 Acquiring a Single Image by Applying One Hardware Trigger Transition You can set the camera to react to a single transition of an external hardware trigger (ExTrig) signal and then you can transition the ExTrig signal to begin image acquisition. When you are using an ExTrig signal to start image acquisition, you should monitor the camera’s trigger ready (TrigRdy) output signal and you should base the use of your ExTrig signal on the state of the trigger ready signal.
Image Acquisition Control For more information about the pylon Viewer, see Section 2.2 on page 20 and Section 2.4 on page 40. For more information about the Trigger Ready signal, see Section 10.6 on page 132. For more information about the camera’s exposure time parameter, see Section 10.4 on page 127. 10.3.
Image Acquisition Control You should also be aware that if the Acquisition Frame Rate Abs parameter is enabled, it will influence the rate at which images can be acquired: If the Acquisition Frame Rate Abs parameter is set to a value less than the maximum allowed, you can trigger acquisition at any rate up to the set value.
Image Acquisition Control 10.4 Exposure Time Parameters Many of the camera’s image acquisition modes require you to specify an exposure time. There are two ways to set exposure time: by setting "raw" values or by setting an "absolute value". The two methods are described below. You can use whichever method you prefer to set the exposure time. The exposure time must not be set below a minimum specified value. The minimum exposure time for each camera model is shown in Table 5.
Image Acquisition Control Changing the Exposure Time Base By default, the Exposure Time Base is fixed at 20 µs, and the exposure time is normally adjusted by setting the value of the Exposure Time Raw parameter. However, if you require an exposure time that is longer than what you can achieve by changing the value of the Exposure Time Raw parameter alone, the Exposure Time Base Abs parameter can be used to change the exposure time base.
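For example, the hedged snippet below sets a 2000 µs exposure using the raw value and the default 20 µs time base. The parameter names ExposureTimeRaw and ExposureTimeBaseAbs follow the naming pattern used elsewhere in this manual, but they are assumptions to check against the API Reference.
// Keep the default 20 µs exposure time base.
Camera.ExposureTimeBaseAbs.SetValue( 20.0 );
// With a 20 µs time base, a raw value of 100 yields a 2000 µs exposure time.
Camera.ExposureTimeRaw.SetValue( 100 );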
You should also be aware that if you change the exposure time using the raw settings, the Exposure Time Abs parameter will automatically be updated to reflect the new exposure time. You can set the Exposure Time Abs parameter value from within your application software by using the pylon API. The following code snippet illustrates using the API to set the parameter value:
Camera.ExposureTimeAbs.SetValue( 124 );
double resultingExpTime = Camera.ExposureTimeAbs.GetValue();
Image Acquisition Control 10.5 Overlapping Exposure and Sensor Readout The image acquisition process on the camera includes two distinct parts. The first part is the exposure of the pixels in the imaging sensor. Once exposure is complete, the second part of the process – readout of the pixel values from the sensor – takes place. In regard to this image acquisition process, there are two common ways for the camera to operate: with “non-overlapped” exposure and with “overlapped” exposure.
Image Acquisition Control Determining whether your camera is operating with overlapped or non-overlapped exposures is not a matter of issuing a command or switching a setting on or off. Rather the way that you operate the camera will determine whether the exposures are overlapped or not overlapped.
Image Acquisition Control 10.6 Trigger Ready Signal As described in the previous section, the cameras can operate in an “overlapped” acquisition fashion. When the camera is operated in this manner, it is especially important that: the exposure time of a new image acquisition not start until exposure of the previously acquired image is complete, and the exposure time of a new image acquisition not end until readout of the previously acquired image is complete.
Image Acquisition Control You should be aware that if the Acquisition Frame Rate Abs parameter is enabled, the operation of the trigger ready signal will be influenced by the value of the parameter: If the value of the parameter is greater than zero but less than the maximum allowed, the trigger ready will go high at the rate specified by the parameter value. For example, if the parameter is set to 10, the trigger ready signal will go high 10 times per second.
Image Acquisition Control 10.7 Exposure Active Signal The camera’s “exposure active” (ExpAc) signal goes high when the exposure time for each image acquisition begins and goes low when the exposure time ends as shown in Figure 39. This signal can be used as a flash trigger and is also useful when you are operating a system where either the camera or the object being imaged is movable.
Image Acquisition Control 10.8 Acquisition Timing Chart Figure 40 shows a timing chart for image acquisition and transmission. The chart assumes that exposure is triggered by an ExTrig signal with rising edge activation and that the camera is set for the timed exposure mode. As Figure 40 shows, there is a slight delay between the rise of the ExTrig signal and the start of exposure.
Image Acquisition Control Fig. 40: Acquisition timing chart showing the TrigRdy and ExTrig signals, the exposure start delay, frame readout to the image buffer, the transmission start delay, and frame transmission to the host PC for successive frames (timing charts are not drawn to scale)
Image Acquisition Control 10.9 Maximum Allowed Acquisition Frame Rate In general, the maximum allowed acquisition frame rate for your camera can be limited by three factors: The amount of time it takes to read the data for an acquired image (known as a frame) from the image sensor to the frame buffer. This time varies depending on the height of the frame. Shorter frames take less time to read out of the sensor. The frame height is determined by the camera’s AOI settings.
Image Acquisition Control For more information about AOI settings, see Section 13.5 on page 185. For more information on the settings that determine the bandwidth assigned to the camera, see Section 7.2 on page 92. Formula 1: Calculates the maximum frame rate based on the sensor readout time: Max. Frames/s = 1 / ([AOI Height × C1] + C2) Where: AOI Height = the height of the acquired frames as determined by the AOI settings.
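Image Acquisition Control As a worked example, the sketch below evaluates Formula 1 for a given AOI height. The constants C1 and C2 are model specific; the values used here are placeholders only and must be replaced with the constants listed for your camera model:
#include <cstdio>

// maximum frame rate limited by the sensor readout time (Formula 1)
double MaxFrameRateFromReadout( double aoiHeight, double c1, double c2 )
{
    return 1.0 / ( ( aoiHeight * c1 ) + c2 );
}

int main()
{
    // hypothetical constants; substitute the C1 and C2 values for your camera model
    double c1 = 0.0000035;
    double c2 = 0.00025;
    std::printf( "Max. frames/s at an AOI height of 480 lines: %f\n", MaxFrameRateFromReadout( 480, c1, c2 ) );
    return 0;
}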
Image Acquisition Control For more information about setting the exposure time, see Section 10.4 on page 127. Formula 3: Calculates the maximum frame rate based on the frame transmission time: Max. Frames/s = Device Current Throughput Parameter Value / …
Pixel Data Formats 11 Pixel Data Formats By selecting a pixel data format, you determine the format (layout) of the image data transmitted by the camera. This section provides detailed information about the available pixel data formats. 11.1 Setting the Pixel Data Format The setting for the camera’s Pixel Format parameter determines the format of the pixel data that will be output from the camera. The available pixel formats depend on the camera model and whether the camera is monochrome or color.
Pixel Data Formats You can set the Pixel Format parameter value from within your application software by using the pylon API. The following code snippet illustrates using the API to set the parameter value:
Camera.PixelFormat.SetValue( PixelFormat_Mono8 );
Camera.PixelFormat.SetValue( PixelFormat_Mono12Packed );
Camera.PixelFormat.SetValue( PixelFormat_Mono16 );
Camera.PixelFormat.SetValue( PixelFormat_YUV422Packed );
Camera.PixelFormat.SetValue( PixelFormat_YUV422_YUYV_Packed );
Camera.PixelFormat.SetValue( PixelFormat_BayerGB8 );
Camera.PixelFormat.SetValue( PixelFormat_BayerGB12Packed );
Camera.PixelFormat.SetValue( PixelFormat_BayerGB16 );
Pixel Data Formats 11.2 Pixel Data Formats for Mono Cameras 11.2.1 Mono 8 Format (Equivalent to DCAM Mono 8) When a monochrome camera is set for the Mono 8 pixel data format, it outputs 8 bits of brightness data per pixel. The table below describes how the pixel data for a received frame will be ordered in the image buffer in your PC when the camera is set for Mono8 output.
Pixel Data Formats 11.2.2 Mono 16 Format (Equivalent to DCAM Mono 16) When a monochrome camera is set for the Mono16 pixel data format, it outputs 16 bits of brightness data per pixel with 12 bits effective. The 12 bits of effective pixel data fill from the least significant bit. The four unused most significant bits are filled with zeros. The table below describes how the pixel data for a received frame will be ordered in the image buffer in your PC when the camera is set for Mono16 output.
Pixel Data Formats When the camera is set for Mono 16, the pixel data output is 16 bit data of the “unsigned short (little endian)” type. The available range of data values and the corresponding indicated signal levels are as shown in the table below. Note that for 16 bit data, you might expect a value range from 0x0000 to 0xFFFF. However, with the camera set for Mono16 only 12 bits of the 16 bits transmitted are effective.
Pixel Data Formats 11.2.3 Mono 12 Packed Format When a monochrome camera is set for the Mono 12 Packed pixel data format, it outputs 12 bits of brightness data per pixel. Every three bytes transmitted by the camera contain data for two pixels. The table below describes how the pixel data for a received frame will be ordered in the image buffer in your PC when the camera is set for Mono 12 Packed output.
Pixel Data Formats When a monochrome camera is set for Mono 12 Packed, the pixel data output is 12 bit data of the “unsigned” type. The available range of data values and the corresponding indicated signal levels are as shown in the table below.
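Pixel Data Formats The sketch below shows one way to unpack a Mono 12 Packed buffer on the PC side. It assumes the GigE Vision Mono 12 Packed layout, i.e. that the shared middle byte of each three-byte group carries the low nibble of the first pixel in its lower four bits and the low nibble of the second pixel in its upper four bits; verify this against the byte ordering table for your camera:
#include <cstdint>
#include <vector>

// unpack a Mono 12 Packed buffer (3 bytes per 2 pixels) into 16 bit values
std::vector<uint16_t> UnpackMono12Packed( const uint8_t* src, size_t pixelCount )
{
    std::vector<uint16_t> dst( pixelCount );
    for ( size_t i = 0; i + 1 < pixelCount; i += 2 )
    {
        const uint8_t b0 = src[0], b1 = src[1], b2 = src[2];
        dst[i]     = static_cast<uint16_t>( ( b0 << 4 ) | ( b1 & 0x0F ) );          // pixel i, bits 11...0
        dst[i + 1] = static_cast<uint16_t>( ( b2 << 4 ) | ( ( b1 >> 4 ) & 0x0F ) ); // pixel i+1, bits 11...0
        src += 3;
    }
    return dst;
}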
Pixel Data Formats 11.2.4 YUV 4:2:2 Packed Format (Equivalent to DCAM YUV 4:2:2) When a monochrome camera is set for the YUV 4:2:2 Packed pixel data format, the camera transmits Y, U, and V values in a fashion that mimics the output from a color camera set for YUV 4:2:2 Packed. The Y value transmitted for each pixel is an actual 8 bit brightness value similar to the pixel data transmitted when a monochrome camera is set for Mono 8. The U and V values transmitted will always be zero.
Pixel Data Formats 11.3 Pixel Data Output Formats for Color Cameras 11.3.1 The Bayer Color Filter The sensor used in color models of the camera is equipped with an additive color separation filter known as a Bayer filter. The pixel data output formats available on color cameras are related to the Bayer pattern, so you need a basic knowledge of the Bayer filter to understand the pixel formats.
Pixel Data Formats 11.3.1.1 Color Filter Alignment The alignment of the Bayer filter to the pixels in the images acquired by color cameras depends on the camera model. Table 9 shows the filter alignment for each available camera model.
Pixel Data Formats 11.3.2 Bayer GB 8 Format (Equivalent to DCAM Raw 8) When a color camera is set for the Bayer GB 8 pixel data format, it outputs 8 bits of data per pixel and the pixel data is not processed or interpolated in any way. So, for each pixel covered with a red lens, you get 8 bits of red data. For each pixel covered with a green lens, you get 8 bits of green data. And for each pixel covered with a blue lens, you get 8 bits of blue data.
Pixel Data Formats With the camera set for Bayer GB 8, the pixel data output is 8 bit data of the “unsigned char” type. The available range of data values and the corresponding indicated signal levels are as shown in the table below.
Pixel Data Formats 11.3.3 Bayer GB 16 Format (Equivalent to DCAM Raw 16) When a color camera is set for the Bayer GB 16 pixel data format, it outputs 16 bits of data per pixel with 12 bits effective. The 12 bits of effective pixel data fill from the least significant bit. The four unused most significant bits are filled with zeros. With the Bayer GB 16 the pixel data is not processed or interpolated in any way. So, for each pixel covered with a red lens, you get 12 effective bits of red data.
Pixel Data Formats
Even lines, end of buffer:
Bm-5: Low byte of blue value for Pn-2
Bm-4: High byte of blue value for Pn-2
Bm-3: Low byte of green value for Pn-1
Bm-2: High byte of green value for Pn-1
Bm-1: Low byte of blue value for Pn
Bm: High byte of blue value for Pn
Odd lines, end of buffer:
Bm-5: Low byte of green value for Pn-2
Bm-4: High byte of green value for Pn-2
Bm-3: Low byte of red value for Pn-1
Bm-2: High byte of red value for Pn-1
Bm-1: Low byte of green value for Pn
Bm: High byte of green value for Pn
When the camera is set for Bayer GB 16, the pixel data output is 16 bit data of the “unsigned short (little endian)” type.
Pixel Data Formats 11.3.4 Bayer GB 12 Packed Format When a color camera is set for the Bayer GB 12 Packed pixel data format, it outputs 12 bits of data per pixel. Every three bytes transmitted by the camera contain data for two pixels. With the Bayer GB 12 Packed coding, the pixel data is not processed or interpolated in any way. So, for each pixel covered with a red lens in the sensor’s Bayer filter, you get 12 bits of red data.
Pixel Data Formats
Odd Lines
Byte: Data
B0: Red value for P0 bits 11 ... 4
B1: Green value for P1 bits 3 ... 0 | Red value for P0 bits 3 ... 0
B2: Green value for P1 bits 11 ... 4
B3: Red value for P2 bits 11 ... 4
B4: Green value for P3 bits 3 ... 0 | Red value for P2 bits 3 ... 0
B5: Green value for P3 bits 11 ... 4
B6: Red value for P4 bits 11 ... 4
B7: Green value for P5 bits 3 ... 0 | Red value for P4 bits 3 ... 0
B8: Green value for P5 bits 11 ... 4
...
Pixel Data Formats 11.3.5 YUV 4:2:2 Packed Format (Equivalent to DCAM YUV 4:2:2) When a color camera is set for the YUV 422 Packed pixel data format, each pixel in the captured image goes through a two step conversion process as it exits the sensor and passes through the camera’s electronics. This process yields Y, U, and V color information for each pixel. In the first step of the process, an interpolation algorithm is performed to get full RGB data for each pixel.
Pixel Data Formats The table below describes how the pixel data for a received frame will be ordered in the image buffer in your PC when the camera is set for YUV 4:2:2 Packed output.
Pixel Data Formats When the camera is set for YUV 4:2:2 Packed output, the pixel data output for the Y component is 8 bit data of the “unsigned char” type. The range of data values for the Y component and the corresponding indicated signal levels are shown below.
This Data Value (Hexadecimal): Indicates This Signal Level (Decimal)
0xFF: 255
0xFE: 254
...
0x01: 1
0x00: 0
The pixel data output for the U component or the V component is 8 bit data of the “straight binary” type.
Pixel Data Formats 11.3.6 YUV 4:2:2 (YUYV) Packed Format On color cameras, the YUV 4:2:2 (YUYV) packed pixel data format is similar to the YUV 4:2:2 pixel format described in the previous section. The only difference is the order of the bytes transmitted to the host PC. With the YUV 4:2:2 format, the bytes are ordered as specified in the DCAM standard issued by the 1394 Trade Association.
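Pixel Data Formats The sketch below illustrates the difference for one pixel pair. It assumes the DCAM byte order U, Y0, V, Y1 for the YUV 4:2:2 format and the order Y0, U, Y1, V for the YUV 4:2:2 (YUYV) format; both orders should be checked against the byte ordering tables for your camera:
#include <cstdint>

struct YuvPair { uint8_t y0, y1, u, v; };

// extract one pixel pair from a buffer in DCAM YUV 4:2:2 order (U, Y0, V, Y1)
YuvPair FromUYVY( const uint8_t* p )
{
    return YuvPair{ p[1], p[3], p[0], p[2] };
}

// extract one pixel pair from a buffer in YUV 4:2:2 (YUYV) order (Y0, U, Y1, V)
YuvPair FromYUYV( const uint8_t* p )
{
    return YuvPair{ p[0], p[2], p[1], p[3] };
}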
Pixel Data Formats When a color camera is set for YUV 4:2:2 (YUYV) output, the pixel data output for the Y component is 8 bit data of the “unsigned char” type. The range of data values for the Y component and the corresponding indicated signal levels are shown below.
This Data Value (Hexadecimal): Indicates This Signal Level (Decimal)
0xFF: 255
0xFE: 254
...
0x01: 1
0x00: 0
The pixel data output for the U component or the V component is 8 bit data of the “straight binary” type.
Pixel Data Formats 11.3.7 Mono 8 Format (Equivalent to DCAM Mono 8) When a color camera is set for the Mono 8 pixel data format, the pixel values in each captured image are first interpolated and converted to the YUV color model as described for the YUV 4:2:2 Packed format. The camera then transmits the 8 bit Y value for each pixel to the host PC. In the YUV color model, the Y component for each pixel represents a brightness value.
Pixel Data Formats
This Data Value (Hexadecimal): Indicates This Signal Level (Decimal)
0xFF: 255
0xFE: 254
...
0x01: 1
0x00: 0
Pixel Data Formats 11.4 Pixel Transmission Sequence For each captured image, pixel data is transmitted from the camera in the following sequence:
Row 0 Col 0, Row 0 Col 1, Row 0 Col 2 ... Row 0 Col m-2, Row 0 Col m-1, Row 0 Col m
Row 1 Col 0, Row 1 Col 1, Row 1 Col 2 ... Row 1 Col m-2, Row 1 Col m-1, Row 1 Col m
Row 2 Col 0, Row 2 Col 1, Row 2 Col 2 ... Row 2 Col m-2, Row 2 Col m-1, Row 2 Col m
...
Row n-2 Col 0, Row n-2 Col 1, Row n-2 Col 2 ...
I/O Control 12 I/O Control This section describes how to configure the camera’s two physical input lines and four physical output lines. It also provides information about monitoring the state of the input and output lines. For more detailed information about the physical and electrical characteristics of the input and output lines, see Section 9.7 on page 108. 12.1 Configuring Input Lines
I/O Control 12.1.2 Using an Unassigned Input Line to Receive a User Input Signal You can use an unassigned input line to receive your own, user-generated input signal. The electrical characteristics of your input signal must meet the requirements shown in the Physical Interface section of this manual. You can use the Line Status or Line Status All parameters to monitor the state of the input line that is receiving the user-defined signal.
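I/O Control For example, a minimal sketch for polling input line 1 via the pylon API (assuming the enumeration name LineSelector_Line1 applies to your camera):
// select input line 1 and read its current state (true = high, false = low)
Camera.LineSelector.SetValue( LineSelector_Line1 );
bool inputLine1High = Camera.LineStatus.GetValue();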
I/O Control 12.2 Configuring Output Lines 12.2.1 Assigning a Camera Output Signal to a Physical Output Line You can use the camera’s output signal assignment capability to assign one of the camera’s standard output signals as the source signal for a physical output line. The camera has a variety of standard output signals available, including: Exposure Active; Trigger Ready; and Timer 1, Timer 2, Timer 3, and Timer 4. You can also designate an output line as "user settable".
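I/O Control For example, the following sketch routes the Exposure Active signal to output line 1; the enumeration names LineSelector_Out1 and LineSource_ExposureActive are assumed to match the pylon feature names for this camera:
// select physical output line 1 and assign the Exposure Active signal as its source
Camera.LineSelector.SetValue( LineSelector_Out1 );
Camera.LineSource.SetValue( LineSource_ExposureActive );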
I/O Control For more information about working with the timer output signals, see Section 12.2.4 on page 170. For more information about the exposure active signal, see Section 10.7 on page 134. For more information about the trigger ready signal, see Section 10.6 on page 132. 12.2.2 Setting the State of User Settable Output Lines As mentioned in the previous section, you can designate one or more of the user output lines as "user settable".
I/O Control To set the state of multiple user settable output lines: Use the User Output Value All parameter to set the state of multiple user settable outputs. You can set the User Output Value All parameter from within your application software by using the pylon API. The following code snippet illustrates using the API to set the parameter: Camera.UserOutputValueAll.SetValue( 0x3 ); int64_t currentOutputState = Camera.UserOutputValueAll.GetValue();
I/O Control 12.2.4 Working with Timers The camera has four timer output signals available: Timer 1, Timer 2, Timer 3, and Timer 4. As shown in Figure 43, each timer works as follows: A trigger source event occurs that starts the timer. A delay period begins to expire. When the delay expires, the timer signal goes high and a duration period begins to expire. When the duration period expires, the timer signal goes low. Fig. 43: Timer operation (trigger source event occurs, delay period, duration period)
I/O Control 12.2.4.2 Setting a Timer Delay Time There are two ways to set the delay time for a timer: by setting "raw" values or by setting an "absolute value". You can use whichever method you prefer to set the delay time. Setting the Delay with Raw Values When the delay time for a timer is set using "raw" values, the delay time will be determined by a combination of two elements. The first element is the value of the Timer Delay Raw parameter, and the second element is the Timer Delay Time Base.
I/O Control Setting the Delay with an Absolute Value You can also set the Timer delay by using an "absolute" value. This is accomplished by setting the Timer Delay Abs parameter. The units for setting this parameter are µs and the value can be set in increments of 1 µs. To set the delay for a timer using an absolute value: Use the Timer Selector to select a timer. Set the value of the Timer Delay Abs parameter.
I/O Control The Timer Duration Raw parameter value can range from 1 to 4095. So if the value is set to 100, for example, the timer duration will be 100 x 1 µs or 100 µs. To set the duration for a timer: Use the Timer Selector to select a timer. Set the value of the Timer Duration Raw parameter. You can set the Timer Selector and the Timer Duration Raw parameter value from within your application software by using the pylon API.
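I/O Control A minimal sketch (the enumeration name TimerSelector_Timer1 is assumed):
// select timer 1 and set its duration to 100 x 1 µs = 100 µs
Camera.TimerSelector.SetValue( TimerSelector_Timer1 );
Camera.TimerDurationRaw.SetValue( 100 );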
I/O Control When you use the Timer Duration Abs parameter to set the duration time, the camera accomplishes the setting change by automatically changing the Timer Duration Raw parameter to achieve the value specified by the Timer Duration Abs setting. This leads to a limitation that you must keep in mind if you use the Timer Duration Abs parameter to set the duration time.
I/O Control 12.3 Checking the State of the I/O Lines 12.3.1 Checking the State of a Single Output Line You can determine the current state of an individual output line. To check the state of a line: Use the Line Selector parameter to select an output line. Read the value of the Line Status parameter to determine the current state of the selected line. A value of true means the line’s state is currently high and a value of false means the line’s state is currently low.
I/O Control the state of the associated line is currently low. If a bit is 1, it indicates that the state of the associated line is currently high. Fig. 44: Line Status All parameter bits (individual bits indicate the states of output lines 4, 3, 2, and 1 and of input lines 2 and 1)
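I/O Control The sketch below reads the Line Status All value and tests a single bit. The bit index for each line must be taken from Figure 44; the index used here is only a placeholder:
// read the packed line states and test one bit
int64_t lineStatusAll = Camera.LineStatusAll.GetValue();
int bitIndex = 0;  // placeholder: use the bit position shown for the line of interest in Fig. 44
bool lineIsHigh = ( ( lineStatusAll >> bitIndex ) & 0x1 ) != 0;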
Standard Features 13 Standard Features This section provides detailed information about the standard features available on each camera. It also includes an explanation of the operation and the parameters associated with each feature. 13.1 Gain The camera’s gain is adjustable. As shown in Figure 45, increasing the gain increases the slope of the response curve for the camera. This results in a higher gray value output from the camera for a given amount of output from the imaging sensor.
Standard Features The sum of the Gain Raw All setting plus the Gain Raw Tap 1 setting must be between 0 and 400 (inclusive). The sum of the Gain Raw All setting plus the Gain Raw Tap 2 setting must be between 0 and 400 (inclusive). For normal operation, we recommend that you set the value of Gain Raw Tap 1 and Gain Raw Tap 2 to zero and that you simply use Gain Raw All to set the gain.
Standard Features Setting the Gain To set the Gain Raw All value: Set the Gain Selector to All. Set the Gain Raw parameter to your desired value. To set the Gain Raw Tap 1 value: Set the Gain Selector to Tap 1. Set the Gain Raw parameter to your desired value. To set the Gain Raw Tap 2 value: Set the Gain Selector to Tap 2. Set the Gain Raw parameter to your desired value.
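Standard Features For example, a sketch that sets the overall gain via the pylon API (the value 300 is an arbitrary example within the allowed range):
// set the overall (All) gain to a raw value of 300
Camera.GainSelector.SetValue( GainSelector_All );
Camera.GainRaw.SetValue( 300 );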
Standard Features 13.2 Black Level Adjusting the camera’s black level will result in an offset to the pixel values output from the camera. As mentioned in the "Functional Description" section of this manual, for readout purposes, the sensor used in the camera is divided into two halves. As a result of this design, there are three black level adjustments available: Black Level Raw All, Black Level Raw Tap 1, and Black Level Raw Tap 2. Black Level Raw All is a global adjustment, i.e.
Standard Features Setting the Black Level To set the Black Level Raw All value: Set the Black Level Selector to All. Set the Black Level Raw parameter to your desired value. To set the Black Level Raw Tap 1 value: Set the Black Level Selector to Tap 1. Set the Black Level Raw parameter to your desired value. To set the Black Level Raw Tap 2 value: Set the Black Level Selector to Tap 2. Set the Black Level Raw parameter to your desired value.
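Standard Features For example, a sketch that sets the global black level via the pylon API (the value 32 is an arbitrary example):
// set the global (All) black level to a raw value of 32
Camera.BlackLevelSelector.SetValue( BlackLevelSelector_All );
Camera.BlackLevelRaw.SetValue( 32 );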
Standard Features 13.3 White Balance (on Color Models) White balance capability has been implemented on color models of the camera. White balancing can be used to adjust the color balance of the images transmitted from the camera. Setting the White Balance With the white balancing scheme used on these cameras, the red intensity, green intensity, and blue intensity can each be adjusted. For each color, a Balance Ratio parameter is used to set the intensity of the color.
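Standard Features For example, a sketch that adjusts the red balance ratio via the pylon API. The parameter names BalanceRatioSelector and BalanceRatioAbs and the example value are assumptions; check them against your camera's feature tree:
// select the red channel and set its balance ratio
Camera.BalanceRatioSelector.SetValue( BalanceRatioSelector_Red );
Camera.BalanceRatioAbs.SetValue( 1.25 );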
Standard Features 13.4 Integrated IR Cut Filter (on Color Models) Color models of the camera that have a C-mount lens adapter are equipped with an IR cut filter as standard equipment. The filter is mounted inside of the lens adapter. Cameras without an IR cut filter are available on request. Monochrome cameras do not include an IR cut filter in the lens adapter. Monochrome cameras with a C-mount lens adapter can be equipped with a filter on request.
Standard Features 13.5 Area of Interest (AOI) The area of interest (AOI) feature lets you specify a portion of the imaging sensor array and after each image is acquired, only the pixel information from the specified portion of the array is transmitted to the host PC. The area of interest is referenced to the top left corner of the array. The top left corner is designated as column 0 and line 0 as shown in Figure 46.
Standard Features Setting the AOI By default, the AOI is set to use the full resolution of the camera’s sensor. You can change the size and the position of the AOI by changing the value of the camera’s X Offset, Y Offset, Width, and Height parameters. The value of the X Offset parameter determines the starting column for the area of interest. The value of the Y Offset parameter determines the starting line for the area of interest.
Standard Features Camera.Height.SetValue( 200 ); Camera.OffsetY.SetValue( 100 ); For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters. For more information about the pylon Viewer, see Section 2.2 on page 20 and Section 2.4 on page 40.
Standard Features 13.6 Binning Note The binning feature is only available on the monochrome cameras. Binning increases the camera’s response to light by summing the charges from adjacent pixels into one pixel. Two types of binning are available: vertical binning and horizontal binning. With vertical binning, adjacent pixels from 2 lines, 3 lines, or a maximum of 4 lines in the imaging sensor array are summed and are reported out of the camera as a single pixel. Figure 47 illustrates vertical binning.
Standard Features The availability of binning differs between the camera models:
Camera Model: Vertical Binning / Horizontal Binning
piA640-210gm: by 2, 3, or 4 / by 2, 3, or 4
piA1000-48gm: not available / not available
piA1600-35gm: by 2, 3, or 4 / by 2, 3, or 4
piA1900-32gm: by 2 / by 2
piA2400-12gm: by 2, 3, or 4 / by 2, 3, or 4
You can combine vertical and horizontal binning. This, however, may cause objects to appear distorted in the image.
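Standard Features For example, a sketch that enables 2 x 2 binning on a camera model that supports it:
// sum the charges from 2 adjacent lines and 2 adjacent columns into one pixel
Camera.BinningVertical.SetValue( 2 );
Camera.BinningHorizontal.SetValue( 2 );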
Standard Features 13.6.1 Considerations When Using Binning Increased Response to Light Using binning can greatly increase the camera’s response to light. When binning is enabled, acquired images may look overexposed. If this is the case, you can reduce the lens aperture, reduce the intensity of your illumination, reduce the camera’s exposure time setting, or reduce the camera’s gain setting. Reduced Resolution Using binning effectively reduces the resolution of the camera’s imaging sensor.
Standard Features Binning’s Effect on the Sensor Readout and Frame Rate Formulas In several areas of the manual, formulas appear for sensor readout time and for calculating the maximum frame rate. In several of these formulas, you must enter the current height of the area of interest (AOI). If you are not using binning, you would enter the height of the AOI in physical sensor lines.
Standard Features 13.7 Luminance Lookup Table The camera can capture pixel values at a 12 bit depth. When a monochrome camera is set for the Mono 16 or Mono 12 packed pixel format, the camera outputs 12 effective bits. Normally, the 12 effective bits directly represent the 12 bit output from the camera’s ADC. The luminance lookup table feature lets you use a custom 12 bit to 12 bit lookup table to map the 12 bit output from the ADC to 12 bit values of your choice.
Standard Features
Fig. 49: Lookup Table with Values Mapped in a Linear Fashion (12 bit sensor reading vs. 12 bit camera output)
Fig. 50: Lookup table (12 bit sensor reading vs. 12 bit camera output)
Standard Features Using the Luminance Lookup Table to Get 8 Bit Output As mentioned above, when the camera is set for a pixel format where it outputs 12 effective bits, the lookup table is used to perform a 12 bit to 12 bit conversion. But the lookup table can also be used in 12 bit to 8 bit fashion. To use the table in 12 bit to 8 bit fashion, you enter 12 bit values into the table and enable the table as you normally would.
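Standard Features The sketch below fills the lookup table with an inverting 12 bit to 12 bit mapping and then enables it. The parameter names LUTSelector, LUTIndex, LUTValue, and LUTEnable are assumed to match the pylon feature names for this camera; note that writing all 4096 entries one at a time can take noticeable time:
// select the luminance lookup table and write an inverting mapping
Camera.LUTSelector.SetValue( LUTSelector_Luminance );
for ( int i = 0; i < 4096; ++i )
{
    Camera.LUTIndex.SetValue( i );
    Camera.LUTValue.SetValue( 4095 - i );  // map each sensor reading to its inverse
}
// enable the lookup table
Camera.LUTEnable.SetValue( true );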
Standard Features 13.8 Event Reporting Event reporting is available on the camera. With event reporting, the camera can generate an "event" and transmit it to the PC whenever a specific situation has occurred.
Standard Features However, if you are operating the camera at high frame rates with a small AOI, the camera may be able to generate and queue events faster than they can be transmitted and acknowledged. In this case:
1. The queue will fill and events will be dropped.
2. An event overrun will occur.
3. Assuming that you have event overrun reporting enabled, the camera will generate an "event overrun event" and place it in the queue.
Standard Features 13.9 Test Images All cameras include the ability to generate test images. Test images are used to check the camera’s basic functionality and its ability to transmit an image to the host PC. Test images can be used for service purposes and for failure diagnostics. For test images, the image is generated internally by the camera’s logic and does not use the optics, the imaging sensor, or the ADC. Six test images are available.
Standard Features Test Image 1 - Fixed Diagonal Gray Gradient (8 bit) The 8 bit fixed diagonal gray gradient test image is best suited for use when the camera is set for monochrome 8 bit output. The test image consists of fixed diagonal gray gradients ranging from 0 to 255. If the camera is set for 8 bit output and is operating at full resolution, test image one will look similar to Figure 51. The mathematical expression for this test image: Gray Value = [column number + row number] MOD 256 Fig. 51: Test Image 1
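Standard Features The expression above can also be used to verify a received test image on the PC side, as in the sketch below (assuming an 8 bit buffer at full resolution with no AOI offset):
#include <cstdint>
#include <cstddef>

// returns true if the buffer matches the fixed diagonal gray gradient test image
bool CheckTestImage1( const uint8_t* buffer, size_t width, size_t height )
{
    for ( size_t row = 0; row < height; ++row )
        for ( size_t col = 0; col < width; ++col )
            if ( buffer[row * width + col] != static_cast<uint8_t>( ( col + row ) % 256 ) )
                return false;
    return true;
}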
Standard Features Test Image 4 - Moving Diagonal Gray Gradient Feature Test (8 bit) The basic appearance of test image 4 is similar to test image 2 (the 8 bit moving diagonal gray gradient image). The difference between test image 4 and test image 2 is this: if a camera feature that involves digital processing is enabled, test image 4 will show the effects of the feature while test image 2 will not.
Standard Features 13.10 Device Information Parameters Each camera includes a set of "device information" parameters. These parameters provide some basic information about the camera. The device information parameters include: Device Vendor Name (read only) - contains the name of the camera’s vendor. This string will always indicate Basler as the vendor. Device Model Name (read only) - contains the model name of the camera, for example, piA640-210gm.
Standard Features
// Read the Device Firmware Version parameter
Pylon::String_t deviceFirmwareVersion = Camera.DeviceFirmwareVersion.GetValue();
// Write and read the Device User ID
Camera.DeviceUserID = "custom name";
Pylon::String_t deviceUserID = Camera.DeviceUserID.GetValue();
// Read the Sensor Width parameter
int64_t sensorWidth = Camera.SensorWidth.GetValue();
// Read the Sensor Height parameter
int64_t sensorHeight = Camera.SensorHeight.GetValue();
// Read the Max Width parameter
int64_t maxWidth = Camera.WidthMax.GetValue();
Standard Features 13.11 Configuration Sets A configuration set is a group of values that contains all of the parameter settings needed to control the camera. There are three basic types of configuration sets: the active configuration set, the default configuration set, and user configuration sets. Active Configuration Set The active configuration set contains the camera’s current parameter settings and thus determines the camera’s performance, that is, what your image currently looks like.
Standard Features Default Startup Set You can select the default configuration set or one of the user configuration sets stored in the camera’s non-volatile memory to be the "default startup set." The configuration set that you designate as the default startup set will be loaded into the active set whenever the camera starts up at power on or after a reset. Instructions for selecting the default startup set appear on the next page.
Standard Features To load a saved configuration set or the default set from the camera’s non-volatile memory into the active set: Set the User Set Selector to User Set 1, User Set 2, User Set 3, or Default. Execute a User Set Load command to load the selected set into the active set. You can set the User Set Selector and execute the User Set Load command from within your application software by using the pylon API.
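Standard Features For example, a minimal sketch that loads user set 1 into the active set (the enumeration name UserSetSelector_UserSet1 is assumed):
// select user set 1 and copy it into the active configuration set
Camera.UserSetSelector.SetValue( UserSetSelector_UserSet1 );
Camera.UserSetLoad.Execute();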
Chunk Features 14 Chunk Features This section provides detailed information about the chunk features available on each camera. 14.1 What are Chunk Features? In most cases, enabling a camera feature will simply change the behavior of the camera. The Test Image feature is a good example of this type of camera feature. When the Test Image feature is enabled, the camera outputs a test image rather than a captured image. This type of feature is referred to as a "standard" feature.
Chunk Features 14.2 Making the "Chunk Mode" Active and Enabling the Extended Data Stamp Before you can use any of the camera’s "chunk" features, the "chunk mode" must be made active. Making the chunk mode active does two things: It makes the Frame Counter, the Time Stamp, and the Line Status All chunk features available to be enabled. It automatically enables the Extended Image Data chunk feature. To make the chunk mode active: Set the Chunk Mode Active parameter to true.
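Chunk Features For example, a sketch that makes the chunk mode active and then enables the frame counter chunk (the enumeration name ChunkSelector_Framecounter is an assumption; it matches the ChunkFramecounter parameter used later in this section):
// make the chunk mode active
Camera.ChunkModeActive.SetValue( true );
// enable the frame counter chunk
Camera.ChunkSelector.SetValue( ChunkSelector_Framecounter );
Camera.ChunkEnable.SetValue( true );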
Chunk Features included in the pylon API. Once the chunk parser has been used, you can retrieve the extended image data by doing the following: Read the value of the Chunk Offset X parameter. Read the value of the Chunk Offset Y parameter. Read the value of the Chunk Width parameter. Read the value of the Chunk Height parameter. Read the value of the Chunk Pixel Format parameter. Read the value of the Chunk Dynamic Range Min. Read the value of the Chunk Dynamic Range Max.
Chunk Features 14.3 Frame Counter The Frame Counter feature numbers images sequentially as they are acquired. When the feature is enabled, a chunk is added to each image containing the value of the counter. The frame counter is a 32 bit value. The counter starts at 0 and wraps at 4294967296. The counter increments by 1 for each acquired image. Whenever the camera is powered off, the counter will reset to 0.
Chunk Features ChunkParser.AttachBuffer( (unsigned char*) Result.Buffer(), Result.GetPayloadSize() ); int64_t frameCounter = Camera.ChunkFramecounter.GetValue(); For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters. For more information about the pylon Viewer, see Section 2.2 on page 20 and Section 2.4 on page 40.
Chunk Features 14.4 Time Stamp The Time Stamp feature adds a chunk to each acquired image containing a time stamp that was generated when image acquisition was triggered. The time stamp is a 64 bit value. The time stamp is based on a counter that counts the number of "time stamp clock ticks" generated by the camera. The unit for each tick is 8 ns (as specified by the Gev Timestamp Tick Frequency). The counter starts at camera reset or at power off/on.
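Chunk Features Because each tick is 8 ns, the chunk value can be converted to seconds as in the sketch below (the ChunkTimestamp parameter name is an assumption; the chunk data must already have been attached to the chunk parser as shown for the other chunk features):
// read the time stamp chunk and convert ticks (8 ns each) to seconds
int64_t timestampTicks = Camera.ChunkTimestamp.GetValue();
double timestampSeconds = static_cast<double>( timestampTicks ) * 8e-9;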
Chunk Features For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference. You can also use the Basler pylon Viewer application to easily set the parameters. For more information about the pylon Viewer, see Section 2.2 on page 20 and Section 2.4 on page 40.
Chunk Features 14.5 Line Status All The Line Status All feature samples the status of all of the camera’s input lines and output lines each time an image acquisition is triggered. It then adds a chunk to each acquired image containing the line status information. The line status all information is a 32 bit value. As shown in Figure 54, certain bits in the value are associated with each line and the bits will indicate the state of the lines.
Chunk Features Camera.ChunkEnable.SetValue( true ); // retrieve data from the chunk IChunkParser &ChunkParser = *Camera.CreateChunkParser(); GrabResult Result; StreamGrabber.RetrieveResult( Result ); ChunkParser.AttachBuffer( (unsigned char*) Result.Buffer(), Result.GetPayloadSize() ); int64_t lineStatusAll = Camera.ChunkLineStatusAll.GetValue(); For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide and API Reference.
Chunk Features 14.6 CRC Checksum The CRC (Cyclic Redundancy Check) Checksum feature adds a chunk to each acquired image containing a CRC checksum calculated using the Z-modem method. As shown in Figure 6-2, the checksum is calculated using all of the image data and all of the appended chunks except for the checksum itself. The CRC chunk is always the last chunk appended to the image data.
Chunk Features
Camera.ChunkSelector.SetValue( ChunkSelector_PayloadCRC16 );
Camera.ChunkEnable.SetValue( true );
// Check the CRC checksum of a grabbed image
IChunkParser &ChunkParser = *Camera.CreateChunkParser();
GrabResult Result;
StreamGrabber.RetrieveResult( Result );
ChunkParser.AttachBuffer( (unsigned char*) Result.Buffer(), Result.GetPayloadSize() );
if ( ChunkParser.HasCRC() && ! ChunkParser.CheckCRC() )
{
    // the image data was corrupted during transmission
}
Troubleshooting and Support 15 Troubleshooting and Support This section outlines the resources available to you if you need help working with your camera. 15.1 Technical Support Resources If you need advice about your camera or if you need assistance troubleshooting a problem with your camera, you can contact the Basler technical support team for your area. Technical support contact information is located in the front pages of this manual.
Troubleshooting and Support 15.2 Before Contacting Basler Technical Support To help you as quickly and efficiently as possible when you have a problem with a Basler camera, it is important that you collect several pieces of information before you contact technical support. Copy the form that appears on the next two pages, fill it out, and fax the pages to your local dealer or to your nearest Basler support center.
Troubleshooting and Support
8  How severe is the problem?
   Camera can still be used.
   Camera can be used after I take this action:
   Camera can no longer be used.
9  Did your application ever run without problems?  Yes  No
10 Parameter set: It is very important for Basler Technical Support to get a copy of the exact camera parameters that you were using when the problem occurred. To make note of the parameters, use Basler’s "The Viewer" tool.
Revision History
Doc. ID Number / Date / Changes
AW00015101000 / 9 Feb 2007 / Preliminary version of the document.
AW00015102000 / 22 Feb 2007 / Updated the camera weight and operating temperature range. This is still a preliminary version.
AW00015103000 / 24 May 2007 / First release covering production cameras.
AW00015104000 / 8 June 2007 / Modified Section 2 for the installation of the Basler pylon software, version 1.0.
AW00015105000 / 19 July 2007 / Integrated the Kodak KAI-2093 sensor.
Feedback Feedback Your feedback will help us improve our documentation. Please click the link below to access an online feedback form. Your input is greatly appreciated. http://www.baslerweb.com/umfrage/survey.
Index

A
acquisition frame rate
   and AOI size: 137
   maximum allowed: 137
acquisition frame rate abs parameter: 114, 118, 126
acquisition mode parameter: 114, 117, 118, 124, 125
acquisition start command: 114, 117, 118, 124, 125
AOI: see area of interest
API

device version parameter: 200
DHCP IP addressing: 67
DHCP server: 67
dimensions: 2, 3, 4, 11
Direct Show driver: 21
drivers
   network: 73

G
gain
   explained: 177
   initial setup: 48
   setting

L
LEDs: 101
lens adapter: 2, 3, 4
lens thread length: 14
level controlled exposure mode: 121
line inverter parameter: 169
line selector: 167, 169
line selector parameter: 175
line source parameter: 167
line status all chunk

R
read timeout parameter: 84
receive window: 76
receive window size parameter: 77
reduced resolution: 190
resend request batching parameter: 78
resend request response timeout parameter: 80
resend request threshold parameter: 78
resend timeout parameter: 80
response to light