Gigabit Ethernet is a data transmission standard based on the IEEE 802.3 protocol. This standard is the next step in the Ethernet and Fast Ethernet standards that are widely used today. While Ethernet and Fast Ethernet are limited to 10 Mb/s and 100 Mb/s, respectively, the Gigabit Ethernet standard allows transmission of up to 1000 Mb/s (or about 125 MB/s). It can run over a wide array of transmission media, including CAT 5 copper cabling, fiber optics, and wireless links.
The GigE Vision standard takes advantage of several features of Gigabit Ethernet.
Figure 1: Various bus technologies and their bandwidths
Like Ethernet and Fast Ethernet, most of the communication packets on Gigabit Ethernet are transported using either the TCP or the UDP protocol. Transmission Control Protocol (TCP or TCP/IP) is more popular than User Datagram Protocol (UDP) because TCP guarantees packet delivery and in-order delivery (i.e., packets are delivered in the same order in which they were sent). This reliability is possible because packet checking and resend mechanisms are built into the TCP protocol. While UDP cannot guarantee packet delivery or ordering, it can achieve higher transmission rates because there is less overhead involved in packet checking.
The GigE Vision standard uses the UDP protocol for all communication between the device and the host. A UDP packet consists of an Ethernet Header, an IP Header, a UDP Header, the packet data, and an Ethernet Trailer. The packet data can be up to 1500 bytes; any data beyond 1500 bytes is typically broken up into multiple packets. For GigE Vision packets, however, it is more efficient to transmit a larger amount of data per packet. Many Gigabit Ethernet network cards support jumbo frames, which allow packet data as large as 9014 bytes.
Figure 2: A UDP Packet
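Because every frame carries the same fixed header overhead, larger frames mean fewer headers per image. The short sketch below makes a back-of-the-envelope comparison between standard and jumbo frames; it assumes the usual Ethernet/IPv4/UDP header sizes and an illustrative 8-byte stream header, not the exact GigE Vision packet layout.

```python
# Back-of-the-envelope comparison of per-frame overhead for standard
# frames vs. jumbo frames. Header sizes are the usual Ethernet/IPv4/UDP
# values; the stream-header size is an assumption for illustration.
ETH_HEADER, ETH_TRAILER = 14, 4   # Ethernet II header + frame check sequence
IP_HEADER, UDP_HEADER = 20, 8     # IPv4 header (no options) + UDP header
STREAM_HEADER = 8                 # assumed stream-header size, for illustration

def frames_and_overhead(image_bytes, mtu):
    """Frames needed and total overhead bytes to move one image at a given MTU."""
    payload = mtu - IP_HEADER - UDP_HEADER - STREAM_HEADER  # image bytes per frame
    frames = -(-image_bytes // payload)                     # ceiling division
    overhead = frames * (ETH_HEADER + IP_HEADER + UDP_HEADER + STREAM_HEADER + ETH_TRAILER)
    return frames, overhead

# 1 MB image: standard 1500-byte MTU vs. 9000-byte jumbo frames
for mtu in (1500, 9000):
    print(mtu, frames_and_overhead(1024 * 1024, mtu))
```

With these assumptions, the jumbo-frame case needs roughly one sixth as many frames, and correspondingly less header overhead and per-packet processing.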
The GigE Vision standard is owned by the Automated Imaging Association (AIA) and was developed by a group of companies from every sector of the machine vision industry with the purpose of establishing a standard that allows camera and software companies to integrate seamlessly over the Gigabit Ethernet bus. It is the first standard that allows images to be transferred at high speeds over long cable lengths.
While Gigabit Ethernet is a standard bus technology, not all cameras with Gigabit Ethernet ports are GigE Vision compliant. In order to be GigE Vision Compliant, the camera must adhere to the protocols laid down by the GigE Vision standard and must be certified by the AIA. If you are unsure whether your camera supports the GigE Vision standard, look for the following logo in the camera documentation.
Figure 3: Official logo for the GigE Vision Standard
The GigE Vision standard defines the behavior of the host as well as the camera. There are four discrete components to this behavior.
When a GigE Vision device is powered on, it attempts to acquire an IP address in the following order:
1. Persistent (static) IP address, if one has been configured on the device
2. DHCP, if a DHCP server is present on the network
3. Link-Local Address (LLA), an automatically assigned address in the 169.254.x.x range
Since cameras can be added to the network at any time, the driver must have some way to discover new cameras. To accomplish this, the driver periodically broadcasts a discovery message over the network. Each GigE Vision compliant device responds with its IP address.
The following algorithm describes the device discovery process.
Figure 4: Device discovery flowchart
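The sketch below illustrates the broadcast-and-collect idea behind discovery. The contents of `DISCOVERY_MSG` are a placeholder rather than the actual GVCP discovery command, whose layout is defined by the GigE Vision specification; the UDP port number shown is the one registered for GVCP.

```python
# Minimal sketch of the discovery step: broadcast a message on the GVCP
# UDP port and collect replies for a short window. DISCOVERY_MSG is a
# placeholder; the real discovery command layout is defined by the
# GigE Vision (GVCP) specification and is not reproduced here.
import socket

GVCP_PORT = 3956          # UDP port registered for GVCP
DISCOVERY_MSG = b"..."    # placeholder for the GVCP discovery command

def discover_devices(timeout=1.0):
    """Return the IP addresses of devices that answered the broadcast."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.settimeout(timeout)
    sock.sendto(DISCOVERY_MSG, ("255.255.255.255", GVCP_PORT))

    devices = []
    try:
        while True:
            _reply, (addr, _port) = sock.recvfrom(2048)
            devices.append(addr)          # each compliant device answers with its IP
    except socket.timeout:
        pass                              # discovery window elapsed
    return devices
```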
GVCP allows applications to configure and control a GigE Vision device. The application sends a command using the UDP protocol and waits for an acknowledgment (ACK) from the device before sending the next command. This ACK scheme ensures data integrity. Using this scheme, the application can get and set various attributes on the GigE Vision device, typically a camera.
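The following minimal sketch illustrates this one-command-at-a-time, retry-on-timeout pattern. It does not show the actual GVCP message layout; `send_bytes` and `recv_bytes` are hypothetical helpers standing in for a UDP socket bound to the device.

```python
# Sketch of the command/acknowledge pattern used for device control:
# send one command, wait for the ACK, and retry on timeout before
# giving up. The payload format is abstracted away; send_bytes() and
# recv_bytes() stand in for a UDP socket bound to the device.
def send_command(send_bytes, recv_bytes, command, retries=3, timeout=0.2):
    for attempt in range(retries):
        send_bytes(command)
        ack = recv_bytes(timeout)      # None if the timeout expires
        if ack is not None:
            return ack                 # device acknowledged; safe to send the next command
    raise TimeoutError("no acknowledgment from device")
```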
The GigE Vision standard defines a minimal set of attributes that GigE Vision devices must support. These attributes, such as image width, height, and pixel format, are required to acquire an image from the camera and hence are mandatory. However, a GigE Vision camera can expose attributes beyond the minimal set. These additional attributes must conform to the GenICam standard.
GenICam provides a unified programming interface for exposing arbitrary attributes in cameras. It uses a computer-readable XML datasheet, provided by the camera manufacturer, to enumerate all of the attributes. Each attribute is defined by its name, interface type, unit of measurement, and behavior.
GenICam compliant XML datasheets eliminate the need for custom camera files for each camera. Instead, the manufacturer can describe all the attributes for the camera in the XML file so that any GigE Vision driver can control the camera. Additionally, the GenICam standard recommends naming conventions for features such as gain, shutter speed, and device model that are common to most cameras.
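As a rough illustration of the idea, the snippet below enumerates named, typed attributes from a simplified, GenICam-style XML fragment. The XML shown is a made-up miniature; real GenICam datasheets are considerably richer, with register mappings, dependencies, and access modes.

```python
# Sketch of reading feature definitions from a simplified, GenICam-style
# XML snippet. This only illustrates the idea of enumerating named,
# typed attributes from an XML datasheet.
import xml.etree.ElementTree as ET

SIMPLIFIED_XML = """
<RegisterDescription>
  <Integer Name="Width"><Min>1</Min><Max>1920</Max><Unit>pixels</Unit></Integer>
  <Integer Name="Height"><Min>1</Min><Max>1080</Max><Unit>pixels</Unit></Integer>
  <Float Name="Gain"><Min>0.0</Min><Max>24.0</Max><Unit>dB</Unit></Float>
</RegisterDescription>
"""

root = ET.fromstring(SIMPLIFIED_XML)
for node in root:
    # attribute name, interface type, and range/unit information
    print(node.get("Name"), node.tag,
          node.findtext("Min"), node.findtext("Max"), node.findtext("Unit"))
```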
The GigE Vision standard uses UDP packets to stream data from the device to the application. The device includes a GigE Vision header as part of the data packet that identifies the image number, packet number, and the timestamp. The application uses this information to construct the image in user memory.
Figure 5: A GigE Vision Packet
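For illustration only, the sketch below parses an assumed, simplified stream header carrying those three fields. The real field widths and offsets are defined by the GigE Vision specification and differ from this layout.

```python
# Illustrative parse of the per-packet information the driver needs:
# which image the packet belongs to, its position within that image,
# and a timestamp. The field sizes and offsets below are assumptions
# for the sketch, not the real GigE Vision stream-header layout.
import struct

HEADER_FMT = ">HIQ"   # assumed: 16-bit image number, 32-bit packet number, 64-bit timestamp

def parse_stream_header(packet: bytes):
    """Return (image_number, packet_number, timestamp) from the assumed header."""
    return struct.unpack_from(HEADER_FMT, packet, 0)

def payload(packet: bytes) -> bytes:
    return packet[struct.calcsize(HEADER_FMT):]   # image data follows the header
```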
Since image data packets are streamed using the UDP protocol, there is no protocol-level handshaking to guarantee packet delivery. Therefore, the GigE Vision standard defines a packet recovery process to ensure that images have no missing data. However, packet recovery is not required for GigE Vision compliance; while most cameras implement it, some low-end cameras may not.
The GigE Vision header, which is part of the UDP packet, contains the image number, packet number, and timestamp. As packets arrive over the network, the driver transfers the image data within the packet to user memory. When the driver detects that a packet has arrived out of sequence (based on the packet number), it places the packet in kernel mode memory. All subsequent packets are placed in kernel memory until the missing packet arrives. If the missing packet does not arrive within a user-defined time, the driver transmits a resend request for that packet. The driver transfers the packets from kernel memory to user memory when all missing packets have arrived.
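A minimal sketch of this reordering and resend logic is shown below. Here `request_resend` is a hypothetical callback that would issue the resend request, out-of-order packets are held in a dictionary rather than kernel memory, and the timing is simplified compared to a real driver.

```python
# Sketch of the reordering logic described above: in-order packets are
# written straight to the image buffer, out-of-order packets are held
# back, and a resend is requested if the gap is not filled in time.
import time

class PacketReassembler:
    def __init__(self, request_resend, resend_timeout=0.01):
        self.expected = 0                  # next packet number we want
        self.held = {}                     # out-of-order packets keyed by number
        self.gap_since = None              # when the current gap was first seen
        self.request_resend = request_resend
        self.resend_timeout = resend_timeout
        self.image = bytearray()

    def on_packet(self, number, data):
        if number == self.expected:
            self.image.extend(data)
            self.expected += 1
            self._flush_held()
        else:
            self.held[number] = data       # hold until the missing packet arrives
            now = time.monotonic()
            if self.gap_since is None:
                self.gap_since = now
            elif now - self.gap_since > self.resend_timeout:
                self.request_resend(self.expected)   # ask the device to resend the gap
                self.gap_since = now

    def _flush_held(self):
        # Move any held packets that are now in sequence into the image buffer.
        while self.expected in self.held:
            self.image.extend(self.held.pop(self.expected))
            self.expected += 1
        self.gap_since = time.monotonic() if self.held else None
```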
The NI-IMAQdx driver is included in Vision Acquisition Software 8.2.1 and later. Please visit the Drivers page for the latest version of the software. The NI-IMAQdx driver supports IIDC compliant IEEE 1394 (FireWire) cameras, GigE Vision compliant Gigabit Ethernet cameras, and USB3 Vision compliant USB 3.0 cameras through a unified API. This section specifically discusses the architecture of the GigE Vision portion of the driver.
In order to better understand the GigE Vision part of the NI-IMAQdx driver, we must first understand the underlying structure of the Windows Network Driver Stack.
Figure 6: The Windows Network Driver Stack
When a GigE Vision image data packet arrives over the network, it can reach the user application through one of two flavors of the NI-IMAQdx driver: the High Performance driver or the Universal driver.
Figure 7: The NI-IMAQdx driver stack
When an image data packet arrives, it follows the same path as any other network packet until it is handled by the protocol driver. The packet is then passed to the NI-IMAQdx driver, which extracts the image data and passes it to the user application.
The High Performance driver was developed to circumvent the overhead within the Windows Network Driver Stack. Because the Universal driver must use the intermediate driver and the protocol driver to communicate with the hardware-specific miniport driver, it requires more CPU processing. The High Performance driver bypasses the intermediate driver and protocol driver by communicating directly with the miniport driver. However, the High Performance driver only works with the miniport driver for Intel's Pro 1000 chipset. The NI-IMAQdx kernel implements the protocol driver to handle TCP and UDP packets.
While the High Performance driver provides better CPU performance than the Universal driver, it only works on Gigabit Ethernet cards with the Intel Pro 1000 chipset. The Universal driver works on any Gigabit Ethernet card recognized by the operating system. CPU usage during acquisition affects the CPU cycles available for image analysis, so if your application requires in-line processing, it is better to invest in a Gigabit Ethernet card with the Intel Pro 1000 chipset. While your choice of driver affects CPU usage, it does not limit the maximum achievable bandwidth; either driver can reach the maximum bandwidth of 125 MB/s.
High Performance Driver vs. Universal Driver
| Driver | Network Card Chipset | CPU Usage | Max Bandwidth |
| --- | --- | --- | --- |
| High Performance Driver | Intel Pro 1000 | Best | 125 MB/s |
| Universal Driver | Any chipset | Good | 125 MB/s |