Chapter 2. SRX Series Product Lines

In Chapter 1, we focused on SRX Series examples and concepts more than anything, and hopefully this approach has allowed you to readily identify the SRX Series products and their typical uses. In this chapter, we take a deep dive into the products so that you can link the specific features of each to a realistic view of its capabilities. We begin with what is common to the entire SRX Series, and then, as before, we divide the product line into branch and data center categories.

Before the deep dive into each SRX Series product, note that all SRX Series platforms share a core set of features, but some platforms also have features the others lack. This can lead to confusion, because feature parity is not uniform across the platforms: the two product lines were designed for different purposes, and the underlying architectures of the branch and data center devices differ.

The branch SRX Series was designed to go small and wide: the devices offer a broad set of features that can solve a variety of problems. This does not mean performance is poor, but rather that breadth of features is the design priority.

The data center SRX Series was designed for scale and speed. These firewalls can grow from a smaller deployment up to huge performance numbers, with performance metrics that scale linearly. This lets the designer of a modular data center SRX Series deployment easily determine how much hardware is required.

Branch SRX Series

The majority of SRX Series firewalls sold and deployed are from within the branch SRX Series, designed primarily for the average firewall deployment. A branch SRX Series product can be identified by its three-digit product number: the first digit represents the series and the last two digits specify the model. The number simply identifies the product; it doesn’t indicate performance, the number of ports, or anything else.

When a branch product is deployed in a small office, as either a remote office location or a company’s main firewall, it needs to provide many different features to secure the network and its users. This means it has to be a jack-of-all-trades, and in many cases, it is an organization’s sole source of security.

Branch-Specific Features

Minimizing the number of pieces of network equipment is important in a remote or small office location, as it reduces the amount of equipment that must be maintained and troubleshot, and of course, the cost. One key to this consolidation is the network switch, and all of the branch SRX Series products provide full switching support, including spanning tree and line rate blind switching. Table 2-1 is a matrix of the possible number of supported interfaces per platform.

Table 2-1. Branch port matrix

                SRX100   SRX110   SRX210   SRX220   SRX240   SRX550   SRX650
10/100               8        8        6        0        0        0        0
10/100/1000          0        0        2        8       20       46       52
PoE                  0        0        4        8       16       40       48
Fixed WAN            0        1        0        0        0        0        0
SFP                  0        0        0        0        0        4        0

As of Junos 12.1X45, the data center SRX Series firewalls do not support blind switching. Although Juniper may add this feature to its data center SRX Series products in the future, it is currently more cost-effective to utilize a Juniper Networks EX Series Ethernet Switch to provide line rate switching and then create an aggregate link back to a data center SRX Series product to provide secure routing between VLANs.

In most branch locations, SRX Series products are deployed as the only source of security. Because of this, some of the services that are typically distributed can be consolidated into the SRX, such as antivirus. Antivirus is a feature that the branch SRX Series can offer its local network, applied to the following protocols: Simple Mail Transfer Protocol (SMTP), Post Office Protocol 3 (POP3), Internet Message Access Protocol (IMAP), Hypertext Transfer Protocol (HTTP), and File Transfer Protocol (FTP). The SRX Series scans for viruses silently as the data passes through the network, allowing it to stop viruses on the protocols where they are most commonly found.
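As a sketch of what enabling this might look like in set-style configuration: the UTM policy name branch-av and the security policy allow-out are hypothetical, while junos-sophos-av-defaults is a default Sophos profile shipped with Junos releases that use the Sophos engine.

```
# Apply the default Sophos AV profile to each supported protocol
# (utm-policy and security policy names are hypothetical)
set security utm utm-policy branch-av anti-virus http-profile junos-sophos-av-defaults
set security utm utm-policy branch-av anti-virus smtp-profile junos-sophos-av-defaults
set security utm utm-policy branch-av anti-virus pop3-profile junos-sophos-av-defaults
set security utm utm-policy branch-av anti-virus imap-profile junos-sophos-av-defaults
set security utm utm-policy branch-av anti-virus ftp upload-profile junos-sophos-av-defaults
set security utm utm-policy branch-av anti-virus ftp download-profile junos-sophos-av-defaults
# Attach the UTM policy to a firewall rule so transit traffic is scanned
set security policies from-zone trust to-zone untrust policy allow-out match source-address any destination-address any application any
set security policies from-zone trust to-zone untrust policy allow-out then permit application-services utm-policy branch-av
```

The key design point is that UTM scanning is not global: it is attached per security policy, so only the traffic matching that rule pays the inspection cost.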

The data center SRX Series does not support the antivirus feature as of Junos 12.1X45. Organizations that deploy a data center SRX Series product typically distribute antivirus scanning to dedicated systems, which increases security while maintaining the performance a data center requires. A bigger security focus in the data center is utilizing IPS to secure connections into servers, which is a more common requirement than antivirus. The IPS feature is supported on both the data center and branch SRX Series product lines.

Antispam is another UTM feature set that aids consolidation of services on the branch SRX Series. Today, it’s reported that almost 95 percent of the email in the world is spam, and this affects productivity. Although some messages are harmless offers for general-use products, others contain vulgar images, sexual overtures, or illicit offers. These messages can be offensive, a general nuisance, and a distraction.

The antispam technology included on the SRX Series can prevent such spam from being received, and it removes the need to use antispam software on another server.
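A minimal sketch of server-based antispam, assuming hypothetical profile and policy names; sbl-default-server points the device at the Juniper-supplied spam block list (SBL) service.

```
# Look up senders against the default SBL service and block hits
# (profile and policy names are hypothetical)
set security utm feature-profile anti-spam sbl profile spam-check sbl-default-server
set security utm feature-profile anti-spam sbl profile spam-check spam-action block
set security utm utm-policy branch-spam anti-spam smtp-profile spam-check
# Apply the UTM policy to inbound SMTP traffic
set security policies from-zone untrust to-zone trust policy inbound-mail match source-address any destination-address any application junos-smtp
set security policies from-zone untrust to-zone trust policy inbound-mail then permit application-services utm-policy branch-spam
```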

Much like antivirus, the data center SRX Series does not provide antispam services. In data center locations where mail services are intended for thousands of users, a larger solution is needed that is distributed on mail proxies or on the mail servers.

Controlling access to what a user can or can’t see on the Internet is called uniform resource locator (URL) filtering. URL filtering allows the administrator to limit what categories of websites can be accessed. Sites that contain pornographic material might seem like the most logical to block, but other types of sites are commonly blocked too, such as social networking sites that can be time sinks for employees. There is also a class of sites that company policy blocks but temporarily allows access to, for instance, during lunch hour. In any case, all of this is possible on the branch SRX Series products.
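A sketch of category-based URL filtering follows; the profile and policy names are hypothetical, and the exact category names vary by filtering engine, license, and release, so treat the ones shown as illustrative placeholders.

```
# Enhanced (server-based) web filtering with per-category actions
# (profile names are hypothetical; category names are illustrative)
set security utm feature-profile web-filtering type juniper-enhanced
set security utm feature-profile web-filtering juniper-enhanced profile block-timewasters category Enhanced_Adult_Content action block
set security utm feature-profile web-filtering juniper-enhanced profile block-timewasters category Enhanced_Social_Networking action block
set security utm feature-profile web-filtering juniper-enhanced profile block-timewasters default permit
# Bind the web-filtering profile to a UTM policy for HTTP traffic
set security utm utm-policy branch-web web-filtering http-profile block-timewasters
```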

For the data center SRX Series product line, URL filtering is not currently integrated. In many large data centers where servers are protected, URL filtering is not needed or is delegated to other products.

Because branch tends to mean small locations spread all over the world, these locations typically require remote access into the local LAN for desktop maintenance or to securely reach other resources. To provide a low-cost and effective solution, Juniper introduced the Dynamic VPN client. This IPsec client allows dynamic access to the branch without any preinstalled software on the client station, a very helpful feature because it makes remote access simple to set up and keeps maintenance to a minimum.

Dynamic VPN is not available on the data center SRX devices. Juniper Networks recommends the use of its SA Series SSL VPN Appliances, allowing for the scaling of tens of thousands of users while providing a rich set of features that go beyond just network access.

When the need for cost-saving consolidation is strong in certain branch scenarios, adding wireless, both cellular and WiFi, can present interesting challenges. Part of the challenge is consolidating these capabilities into a device without creating radio frequency (RF) interference; the other part is providing a device that can be centrally placed and still send and receive a strong enough wireless signal to provide value.

All electronic devices give off some sort of RF interference, and all electronic devices state this clearly on their packaging or labels. Although this might be minor interference in the greater scheme of things, it can also be extremely detrimental to wireless technologies such as cellular Internet access or WiFi—therefore, extreme care is required when integrating these features into any product. Some of the branch SRX Series products have the capability to attach a cellular Internet card or USB dongle directly to them, which can make sense in some small branch locations because, typically, cellular signals are fairly strong throughout most buildings.

But what if the device is placed in the basement where it’s not very effective at receiving these cellular signals? Because of this and other office scenarios, Juniper Networks provides a product that can be placed anywhere and is both powered and managed by the SRX Series: the Juniper Networks CX111 Cellular Broadband Data Bridge.

The same challenge carries over for WiFi. If an SRX Series product is placed in a back room or basement, an integrated WiFi access point might not be very relevant, so Juniper took the same approach and provides an external access point (AP) called the AX411 Wireless LAN Access Point. This AP is managed and powered by any of the branch SRX Series products.

As you might guess, although the wireless features are very compelling for the branch, they aren’t very useful in a data center. Juniper has abstained from bringing wireless features to the data center SRX Series products. Instead Juniper recommends deploying the Juniper Wireless LAN solutions based on the Trapeze acquisition.

The first Junos products for the enterprise market were the Juniper Networks J Series Services Routers, and the first iteration of the J Series was a packet-based device. This means the device acts on each packet individually without any concern for the next packet—typical of how a traditional router operates. Over time, Juniper moved the J Series products toward the capabilities of a flow-based device, and this is where the SRX Series devices evolved from.

Although a flow-based device has many merits, it’s unwise to move away from being able to provide packet services, so the SRX Series can run in packet mode as well as flow. It’s even possible to run both modes simultaneously! This allows the SRX Series to act as traditional packet-based routers and to run advanced services such as MPLS.
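A sketch of how selective packet services can be enabled: a stateless firewall filter whose packet-mode action bypasses flow processing for matching traffic. The filter and interface names here are illustrative.

```
# Any traffic matching this term bypasses the flow (session) engine
# and is handled packet by packet, as a traditional router would
set firewall family inet filter bypass-flow term all then packet-mode
set firewall family inet filter bypass-flow term all then accept
# Apply the filter to the ingress interface (interface name illustrative)
set interfaces ge-0/0/0 unit 0 family inet filter input bypass-flow
```

Traffic that does not hit a packet-mode filter continues through normal flow processing, which is how both modes run simultaneously.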

MPLS as a technology is not new—carrier networks have been using it for years. Many enterprise networks have used MPLS, but typically it has been done transparently to the enterprise. Now, with the SRX Series, the enterprise has a low-cost solution, so it can create its own MPLS network, bringing the power back to the enterprise from the service providers and saving money on MPLS as a managed service. On the flip side, it allows the service providers to offer a low-cost service that provides security and MPLS in a single platform. MPLS and its family of protocols are fairly complex and are outside the scope of this book. Please refer to Junos Enterprise Routing for an in-depth look at the subject.

The last feature common to the branch SRX Series products is their ability to utilize many types of WAN interfaces. We will detail these interface types as we drill down into each SRX Series platform.

The data center SRX Series products utilize only Ethernet interfaces. These are the most common interfaces in the locations where these products are deployed, and a data center SRX Series product is typically paired with a Juniper Networks MX Series 3D Universal Edge Router, which can provide WAN interfaces.

SRX100 Series

The SRX100 series, as of Junos 12.1X45, has two products in the line (if you remember from the SRX numbering scheme, the 1 is the series number and the 00 is the product number inside that series). The SRX100 Services Gateway is shown in Figure 2-1, and it is a fixed form factor, meaning no additional modules or changes can be made to the product after it is purchased. As you can see in Figure 2-1, the SRX100 has a total of eight 10/100 Ethernet ports, and perhaps more difficult to see, but clearly onboard, are a serial console port and a USB port.

Figure 2-1. The SRX100

The eight Ethernet ports can be configured in many different ways. They can be configured in the traditional manner, in which each port has its own IP address, or in any combination as an Ethernet switch. The switching capabilities of the EX Series switches have been brought into the SRX100, so it not only supports line rate blind switching but also several variants of the spanning tree protocols; if the network is expanded in the future, an errant configuration won’t lead to a network loop. The SRX100 can also provide a default gateway on its local switch by using a VLAN interface, as well as act as a Dynamic Host Configuration Protocol (DHCP) server.
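As a sketch of this consolidation, the following set-style configuration places two ports in the same VLAN, provides a default gateway via a VLAN interface, and enables a DHCP server. The vlan-trust VLAN matches the factory default; the port numbers, VLAN ID, pool range, and addressing are illustrative.

```
# Put two front-panel ports into the same switched VLAN
set interfaces fe-0/0/1 unit 0 family ethernet-switching vlan members vlan-trust
set interfaces fe-0/0/2 unit 0 family ethernet-switching vlan members vlan-trust
# Give the VLAN a routed (Layer 3) interface to act as the default gateway
set vlans vlan-trust vlan-id 3 l3-interface vlan.0
set interfaces vlan unit 0 family inet address 192.168.1.1/24
# Hand out addresses to LAN clients from the SRX itself
set system services dhcp pool 192.168.1.0/24 address-range low 192.168.1.10 high 192.168.1.100
set system services dhcp pool 192.168.1.0/24 router 192.168.1.1
```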

Although the SRX100 is a small, desktop-sized device, it’s also a high-performing platform, standing out by providing up to 700 Mbps of firewall throughput. This might seem like an exorbitant amount for a branch platform, but it’s warranted where security is needed between two local network devices. For a WAN connection, 700 Mbps is far more than a location using this type of device would need, but small offices have a way of growing.

Speaking of performance, the SRX100 supports high rates of VPN, IPS, and antivirus traffic as well, should the need for these features arise where the SRX100 is deployed. The SRX100 also supports a session ramp-up rate of 1,800 new connections per second (CPS), the number of new TCP-based sessions that can be created each second. UDP sessions are also supported, but this metric is rated with TCP because setting up a TCP session takes three packets (the three-way handshake) versus a single packet for UDP (see Table 2-2).

Table 2-2. SRX100 capacities

Type                                          Capacity
CPS                                           1,800
Maximum firewall throughput                   700 Mbps
Maximum IPS throughput                        75 Mbps
Maximum AppSecure throughput                  90 Mbps
Maximum VPN throughput                        65 Mbps
Maximum antivirus throughput (Sophos AV)      25 Mbps
Maximum concurrent sessions                   16K (512 MB of RAM); 32K (1 GB of RAM)
Maximum firewall policies                     384
Maximum concurrent users                      Unlimited

Although 1,800 new connections per second seems like overkill, it isn’t. Many applications today are written in such a way that they might attempt to grab 100 or more data streams simultaneously. If the local firewall device is unable to handle this rate of new connections, these applications could fail to complete their transactions, leading to user complaints and, ultimately, the cost or loss of time in troubleshooting the network.

Also, because users might require many concurrent sessions, the SRX100 can support up to 32,000 sessions. A session is a monitored connection between two peers and can use the common TCP and UDP protocols or others such as Encapsulating Security Payload (ESP) or Generic Routing Encapsulation (GRE).

The SRX100 comes in two memory options: a low-memory and a high-memory version. Moving between them doesn’t require a change of hardware, simply the addition of a license key that activates access to the additional memory. The base version uses 512 MB of memory and the high-memory version uses 1 GB. When the license key is added and the device is rebooted, a new SRX Series flow daemon designed to access the entire 1 GB of memory is brought online.
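The license workflow can be sketched from operational mode; the prompt shown is illustrative, and the exact license feature names displayed vary by release.

```
# Paste the license key text, then reboot to bring up the new flow daemon
user@srx100> request system license add terminal
user@srx100> request system reboot
# After the reboot, confirm the license and the new capacity
user@srx100> show system license
```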

Activating the 1 GB of memory does more than double the number of sessions; it is also required to utilize UTM. If any of the UTM features are activated, the total number of sessions is cut back to the low-memory number, freeing memory for the UTM processes to run. The administrator can choose whether sessions or the UTM features are more important.

The SRX100 series has also added a new model: the SRX110. There are two differences from the SRX100. The biggest is a built-in VDSL/ADSL2+ port, which gives this small device an integrated WAN port and removes the need to move to a larger device just to get DSL; the SRX110 is the only SRX that comes with a fixed WAN interface. The second difference is that the SRX110 comes only in a high-memory model. Figure 2-2 shows the SRX110.

Figure 2-2. The SRX110

The SRX100 can be placed in one of four ways. The default placement is on any flat surface; the other three, standing vertically on a desktop, sitting in a network equipment rack, or mounted on a wall, require additional hardware. The wall mount kit accommodates a single SRX100, and the rack mount kit accommodates up to two SRX100 units in a single rack unit.

SRX200 Series

The SRX200 line is the next step up in the branch SRX Series. The goal of the SRX200 line is to provide modular solutions to branch environments. This modularity comes through the use of various interface modules that allow the SRX200 line to connect to a variety of media types such as T1. Furthermore, the modules can be shared among all of the devices in the line.

The first device in the line is the SRX210. It is similar to the SRX100, except that it has additional expansion capabilities and extended throughput. The SRX210 has eight Ethernet ports like the SRX100 does, but two of them are 10/100/1000 tri-speed ports, allowing high-speed devices such as switches or servers to be connected. In addition, the SRX210 can be optionally ordered with built-in Power over Ethernet (PoE) ports. If this option is selected, the first four ports on the device can provide up to 15.4W of power to devices, be they Voice over Internet Protocol (VoIP) phones or Juniper’s AX and CX wireless devices.

Figure 2-3 shows the SRX210. Note in the top right the large slot where the mini-PIM is inserted. The front panel includes the eight Ethernet ports. Similar to the SRX100, the SRX210 includes a serial console port and, in this case, two USB ports. The eight Ethernet ports can be used (just like the SRX100) to provide line rate blind switching, a traditional Layer 3 interface, or both.

The rear of the box contains a surprise. In the rear left, as depicted in Figure 2-4, an ExpressCard slot is shown. This ExpressCard slot can utilize 3G or cellular modem cards to provide access to the Internet, which is useful for dial backup or the new concept of a zero-day branch. In the past, when an organization wanted to roll out branches rapidly, it required the provisioning of a private circuit or a form of Internet access. It might take weeks or months to get this service installed. With the use of a 3G card, a branch can be installed the same day, allowing organizations and operations to move quickly to reach new markets or emergency locations. Once a permanent circuit is deployed, the 3G card can be used for dial backup or moved to a new location.

Figure 2-3. The front of the SRX210
Figure 2-4. The back of the SRX210

The performance of the SRX210 is in the same range as the SRX100’s, but higher across its various capabilities. As you can see in Table 2-3, the overall throughput increases from 700 Mbps on the SRX100 to 850 Mbps on the SRX210, and the VPN and antivirus throughputs also increase. A significant change is that the total number of sessions doubles, for both the low-memory and high-memory versions; that, in addition to the modularity of the platform, is a significant advantage. In 2012, the SRX210 platform was silently upgraded, giving newer devices increased throughput over the original edition. The newer device is called the enhanced version, but the two names are used interchangeably.

Table 2-3. SRX210 capacities

Type                                          Capacity
CPS                                           2,200
Maximum firewall throughput                   850 Mbps
Maximum IPS throughput                        65 Mbps
Maximum AppSecure throughput                  250 Mbps
Maximum VPN throughput                        85 Mbps
Maximum antivirus throughput (Sophos AV)      30 Mbps
Maximum concurrent sessions                   32K (512 MB of RAM); 64K (1 GB of RAM)
Maximum firewall policies                     512
Maximum concurrent users                      Unlimited

The SRX210 consists of three hardware models: the base memory model, the high-memory model, and the PoE with high-memory model (it isn’t possible to purchase a base memory model and PoE). Unlike the SRX100, the memory models are actually fixed and cannot be upgraded with a license key. So when planning for a rollout with the SRX210, it’s best to plan ahead in terms of what you think the device will need. The SRX210 also has a few hardware accessories: it can be ordered with a desktop stand, a rack mount kit, or a wall mount kit. The rack mount kit can accommodate one SRX210 in a single rack unit.

The SRX220 fits cleanly between the SRX210 and the SRX240. It is characterized by eight 10/100/1000 ports and two mini-PIM slots. The SRX220 does not have an ExpressCard slot, but it can use its onboard USB port to connect 3G modems (see Figure 2-5 and Figure 2-6). The SRX220 is a great choice for a branch network that requires additional mini-PIM slots and up to eight tri-speed Ethernet ports, all of which also offer Power over Ethernet.

Figure 2-5. The front of the SRX220
Figure 2-6. The back of the SRX220

As expected, the performance of the SRX220 fits cleanly between the SRX210 and the SRX240 (see Table 2-4). The SRX220 offers a marginal performance boost over the SRX210 but only about half the performance of the SRX240. It is a good fit when more connectivity is needed than the SRX210 offers but the price of the SRX240 is too high.

Table 2-4. SRX220 capacities

Type                                          Capacity
CPS                                           2,800
Maximum firewall throughput                   950 Mbps
Maximum IPS throughput                        80 Mbps
Maximum AppSecure throughput                  300 Mbps
Maximum VPN throughput                        100 Mbps
Maximum antivirus throughput (Sophos AV)      35 Mbps
Maximum concurrent sessions                   96K
Maximum firewall policies                     2,048
Maximum concurrent users                      Unlimited

The SRX240 is the first departure from the small desktop form factor, as it is designed to be mounted in a single rack unit. It can also be placed on top of a desk and is about the size of a pizza box. Unlike the other members of the SRX200 line, the SRX240 includes sixteen 10/100/1000 Ethernet ports, but as on the other platforms, line rate switching can be achieved between all of the ports configured in the same VLAN. It’s also possible to configure interfaces as standard Layer 3 interfaces, and each interface can contain multiple subinterfaces, each on its own separate VLAN. This capability is shared across all of the SRX product lines, but it’s typically used on the SRX240 because the SRX240 is deployed on larger networks.
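A sketch of VLAN-tagged subinterfaces on a single port; the interface, VLAN IDs, zone names, and addresses here are illustrative.

```
# Enable 802.1Q tagging on the physical port, then create one
# logical unit (subinterface) per VLAN
set interfaces ge-0/0/0 vlan-tagging
set interfaces ge-0/0/0 unit 100 vlan-id 100 family inet address 10.1.100.1/24
set interfaces ge-0/0/0 unit 200 vlan-id 200 family inet address 10.1.200.1/24
# Each subinterface can be placed in its own security zone
set security zones security-zone trust interfaces ge-0/0/0.100
set security zones security-zone dmz interfaces ge-0/0/0.200
```

Because each subinterface sits in its own zone, traffic routed between the VLANs is subject to security policy, which is exactly the secure inter-VLAN routing role described above.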

Figure 2-7 shows the SRX240, and you should be able to see the sixteen 10/100/1000 Ethernet ports across the bottom front of the device. There’s the standard fare of one serial console port and two USB ports, and on the top of the front panel of the SRX240 are the four mini-PIM slots. These slots can be used for any combination of supported mini-PIM cards.

Figure 2-7. The SRX240

The performance of the SRX240 is roughly double that of the lower SRX200 platforms. It’s designed for midrange to large branch locations and can handle up to 8,500 new connections per second, nearly four times the rate of the SRX210. Not only is this good for outbound traffic, but it is also great for hosting small- to medium-size services behind the device, including web, DNS, and email services, which are typical for a branch network. The throughput of the device is ample for a small network, as it can secure more than 1 gigabit per second of traffic. This allows several servers to sit behind it with the traffic to them secured from both the internal and external networks. The device also provides high IPS throughput, which is great for inspecting traffic from untrusted hosts as it passes through the device.

Again, Table 2-5 shows that the total number of sessions on the device has doubled from the lower models. The maximum of 128,000 sessions (256,000 with 2 GB of RAM) is considerable for most networks. Just as you saw on the SRX210, the SRX240 comes in three hardware models: the base memory model with 512 MB of memory (unable to run UTM, and limited to half the number of sessions); the high-memory version with twice the memory (able to run UTM with an additional license); and the high-memory with PoE model, which can provide PoE on all 16 of its built-in Ethernet ports. In 2012, the SRX240 was silently bumped up to what is known as the enhanced model. This model offers up to 2 GB of RAM, which boosts the overall capacity of the device, and a slightly faster CPU for additional throughput. The SRX240 enhanced and SRX240 model names are used interchangeably. The only real restriction is that the older and newer SRX240s cannot be clustered together.

Table 2-5. SRX240 capacities

Type                                          Capacity
CPS                                           8,500
Maximum firewall throughput                   1.8 Gbps
Maximum IPS throughput                        230 Mbps
Maximum VPN throughput                        300 Mbps
Maximum antivirus throughput                  85 Mbps
Maximum concurrent sessions                   128K (1 GB of RAM); 256K (2 GB of RAM)
Maximum firewall policies                     4,096
Maximum concurrent users                      Unlimited

Interface modules for the SRX200 line

The SRX200 Series Services Gateways currently support six different types of mini-PIMs, as shown in Table 2-6. On the SRX240 these can be mixed and matched in any combination the administrator chooses, offering great flexibility when several different types of WAN interfaces are needed. The administrator can also add up to four Small Form-factor Pluggable (SFP) mini-PIMs to the SRX240, giving it a total of 20 gigabit Ethernet ports. Each SFP port can accept either a fiber optic connection or a copper twisted pair link. The SRX210 can only accept one card at a time, so it cannot mix and match cards, although it can accept any of them. And although the SRX210 is not capable of inspecting traffic at gigabit speeds, a fiber connection might still be required when a long haul fiber link connects the SRX210 to the network.

Table 2-6. Mini-PIMs

Type                      Description
ADSL                      1-port ADSL2+ mini-PIM supporting ADSL/ADSL2/ADSL2+ Annex A
ADSL                      1-port ADSL2+ mini-PIM supporting ADSL/ADSL2/ADSL2+ Annex B
G.SHDSL                   8-wire (4-pair) G.SHDSL mini-PIM
Serial                    1-port Sync Serial mini-PIM
SFP                       1-port SFP mini-PIM
T1/E1                     1-port T1/E1 mini-PIM
DOCSIS 3.0 cable modem    1-port 75-ohm coaxial cable

The ADSL cards support all of the modern standards for DSL and work with most major carriers (Annex A is used on lines that also carry analog telephone service; Annex B on lines that carry ISDN). G.SHDSL is a newer, higher speed version of DSL provided over traditional twisted pair lines. Between these cards, all common forms of DSL are available to the SRX200 line.

New to the mini-PIM line is the DOCSIS 3.0 card. This card allows an SRX200 Series device to act as a cable modem. This is quite desirable as cable modems are both stable and fast enough for most businesses today.

The SRX200 line also supports the use of the tried-and-true serial port connection. This allows for connection to an external serial port and is the least commonly used interface card. A more commonly used interface card is the T1/E1 card, which is typical for WAN connection to the SRX200 line. Although a T1/E1 connection might be slow by today’s standards, compared to the average home broadband connection, it is still commonly used in remote branch offices.

SRX500 Series

The SRX500 line sits between the SRX240 and the SRX650. It is designed to offer cost-effective, high-performing security to the branch market, with G-PIM, X-PIM, and mini-PIM support for both WAN and LAN interfaces; it is the largest device that offers mini-PIM support. The series was created for customers who craved the performance of the SRX650 but wanted mini-PIM support in a more cost-effective package. There is currently only one product in the series, the SRX550 (see Figure 2-8).

Figure 2-8. The SRX550

The performance of the SRX550 (see Table 2-7) is a bit more than double that of the SRX240 and about 30 percent less than that of the SRX650; it is very strong for its price point in the SRX line. The SRX550 includes 10 fixed ports, 6 of which are tri-speed copper Ethernet ports and 4 of which are SFPs. It supports dual power supplies, and the base power supply can provide partial Power over Ethernet support; full PoE powering requires two power supplies. When UTM is enabled on the SRX550, the maximum session capacity is cut in half to allow for the additional UTM processing. Starting from a base of 375,000 concurrent sessions, even with UTM enabled there should still be significant session capacity for most environments.

Table 2-7. SRX550 capacities

Type                                          Capacity
CPS                                           27,000
Maximum firewall throughput                   5.5 Gbps
Maximum IPS throughput                        800 Mbps
Maximum VPN throughput                        1 Gbps
Maximum antivirus throughput                  300 Mbps
Maximum concurrent sessions                   375K
Maximum firewall policies                     7,256
Maximum concurrent users                      Unlimited

SRX600 Series

The SRX600 line differs the most from the rest of the branch SRX Series. It is extremely modular and offers very high performance for a device categorized as a branch solution.

The only model in the SRX600 line (at the time of this writing) is the SRX650. The SRX650 comes with four onboard 10/100/1000 ports. All the remaining components are modules. The base system comes with the chassis and a component called the Services and Routing Engine (SRE). The SRE provides the processing and management capabilities for the platform. It has the same architecture as the other branch platforms, but this time the component for processing is modular.

Figure 2-9 shows the front of the SRX650 chassis, and the four onboard 10/100/1000 ports are found on the front left. The other items to notice are the eight modular slots, which are different here than in the other SRX platforms. Here the eight slots are called G-PIM slots, but it is also possible to utilize another card type called an X-PIM, which utilizes multiple G-PIM slots.

On the back of the SRX650 is where the SRE is placed. There are two slots that fit the SRE into the chassis, but note that as of the Junos 13.1 release, only the bottom slot can be used. In the future, the SRX650 might support a new double-height SRE, or even multiple SREs. On the SRE, there are several ports: first, the standard serial console port, and then a secondary serial auxiliary port, shown in the product illustration in Figure 2-10. Also, the SRE has two USB ports.

Figure 2-9. The front of the SRX650
Figure 2-10. The back of the SRX650

New to this model is the inclusion of a secondary compact flash port. This port allows for expanded storage for logs or software images. The SRX650 also supports up to two power supplies for redundancy.

The crowning feature of the SRX650 is its performance. The SRX650 is more than enough for most branch office locations, with room for growth. As shown in Table 2-8, it can provide up to 30,000 new CPS, ample for a fair number of servers hosted behind the firewall as well as a large population of users. The total number of concurrent sessions is four times that of the SRX240, with a maximum of 512,000 sessions. Only 256,000 sessions are available when UTM is enabled; the remaining memory is shifted to the UTM features.

Table 2-8. SRX650 capacities

Type                           Capacity
CPS                            30,000
Maximum firewall throughput    7 Gbps
Maximum IPS throughput         1.5 Gbps
Maximum VPN throughput         1.5 Gbps
Maximum antivirus throughput   350 Mbps
Maximum concurrent sessions    512K (2 GB of RAM)
Maximum firewall policies      8,192
Maximum concurrent users       Unlimited

The SRX650 provides more than enough throughput for most deployments, and it can provide local switching as well. The maximum total throughput is 7 gigabits per second, which represents a fair bit of secure traffic inspection on this platform. The available UTM services are also extremely fast: both IPS and VPN performance exceed 1 Gbps. The lowest performing value is inline antivirus, and although 350 Mbps is far lower than the maximum throughput, it is very fast considering the amount of inspection needed to scan files for viruses.

Interface modules for the SRX600 line

The SRX650 has lots of different interface options that are not available on any other platform today. This makes the SRX650 fairly unique as a platform compared to the rest of the branch SRX Series. The SRX650 can use two different types of modules: the G-PIM and the X-PIM. The G-PIM occupies only one of the possible eight slots, whereas an X-PIM takes a minimum of two slots, and some X-PIMs take a maximum of four slots. Table 2-9 lists the different interface cards.

Table 2-9. SRX600 interface matrix

Type                                          Description                                                                    Slots
Dual T1/E1                                    Two T1/E1 ports with integrated CSU/DSU (G-PIM)                                1
Quad T1/E1                                    Four T1/E1 ports with integrated CSU/DSU (G-PIM)                               1
16-port 10/100/1000                           Ethernet switch 16-port 10/100/1000-baseT X-PIM                                2
16-port 10/100/1000 PoE                       Ethernet switch 16-port 10/100/1000-baseT X-PIM with PoE                       2
24-port 10/100/1000 plus four SFP ports       Ethernet switch 24-port 10/100/1000-baseT X-PIM; includes four SFP slots       4
24-port 10/100/1000 PoE plus four SFP ports   PoE Ethernet switch 24-port 10/100/1000-baseT X-PIM; includes four SFP slots   4

Two different types of G-PIM cards provide T1/E1 ports. One provides two T1/E1 ports and the other provides a total of four ports. These cards can go in any of the slots on the SRX650 chassis, up to the maximum of eight slots.

The next type of card is the dual-slot X-PIM. These cards provide sixteen 10/100/1000 ports and come in the PoE or non-PoE variety. Using this card takes up two of the eight slots. They can only be installed in the right side of the chassis, with a maximum of two cards in the chassis.

The third type of card is the quad-slot X-PIM. This card has 24 10/100/1000 ports and 4 SFP ports and comes in a PoE and non-PoE version. The SFP ports can use either fiber or twisted pair SFP transceivers. Figure 2-11 shows the possible locations of each type of card.

Figure 2-11. SRX650 PIM card diagram

Local switching is achieved at line rate only between ports on the same card; it is not possible to configure switching across cards. All traffic that passes between cards must be inspected by the firewall, so inter-card throughput is limited to the firewall's maximum inspection rate. Administrators who deploy the SRX should be aware of this limitation.

JunosV Firefly (Virtual Junos)

Today, computing power is cheap. For a few thousand dollars, one can buy a server with 12 or more processing cores and hundreds of gigabytes of memory. Because of this, the shift to virtualization has been occurring over the last several years. What used to be an entire data center can now be contained on just a few servers, so virtualizing networking is a necessity. Originally, only switching was virtualized as part of the hypervisor (the software that provides an abstraction layer between the hardware and the virtual OS), but now it is common for entire networks to exist within a server. Every network needs a border, and for servers, that means a firewall.

The most popular hypervisor for virtualization is made by VMware. Because of this, the initial release of the product supports only VMware. In the future, Firefly will support other hypervisors such as Xen and KVM. As of early 2013, JunosV Firefly is in controlled availability, but it will be openly available soon.

AX411

The AX411 Wireless LAN Access Point is not an SRX device, but more of an accessory to the branch SRX Series product line. The AX411 cannot operate on its own without an SRX Series appliance. To use the AX411 device, simply plug it into an SRX device that has DHCP enabled and an AX411 license installed. The AP will get an IP address from the SRX and register with the device, and the configuration for the AX411 will be pushed down from the SRX to the AX411. Then queries can be sent from the SRX to the AX411 to get status on the device and its associated clients. Firmware updates and remote reboots are also handled by the SRX product.

The AX411 is designed to be placed wherever it’s needed: on a desktop, mounted on a wall, or inside a drop ceiling. As shown in Figure 2-12, the AX411 has three antennas and one Ethernet port. It also has a console port, which is not user-accessible.

Figure 2-12. The AX411 WLAN Access Point

The AX411 has impressive wireless capabilities, supporting 802.11a/b/g/n networking. The three antennas provide multiple-input, multiple-output (MIMO) operation for maximum throughput. The device features two separate radios, one in the 2.4 GHz range and the other in the 5 GHz range. For the small branch, it meets all the requirements of an AP. However, the AX411 is not meant to provide wireless access for a large campus network, so administrators should not expect to deploy dozens of AX411 products in conjunction.

Each SRX device in the branch SRX Series is only capable of managing a limited number of AX411 appliances, and Table 2-10 shows the number of APs per platform that can be managed. The SRX100 can manage up to two AX411 devices. From there, each platform doubles the total number of APs that can be managed, going all the way up to 16 APs on the SRX650.

Table 2-10. Access points per platform

Platform   Number of access points
SRX100     2
SRX210     4
SRX240     8
SRX650     16

CX111

The CX111 Cellular Broadband Data Bridge (see Figure 2-13) can be used in conjunction with the branch SRX Series products. The CX111 is designed to accept a 3G (cellular) modem and then provide access to the Internet via a wireless carrier. The CX111 supports wireless cards from about 40 different manufacturers and can hold up to three USB wireless cards and one ExpressCard. Access to the various wireless providers can be always-on or dial-on-demand.

Figure 2-13. The CX111

There aren’t any specific hooks between the CX111 bridge and the SRX products; the CX111 can be used with any branch product to act as a wireless bridge. Its biggest benefit is that it can be placed wherever the wireless signal is best, powered either by PoE or by a separate power supply. This way, the SRX device can be placed in a back closet or under a counter while the CX111 sits by a window.

Branch SRX Series Hardware Overview

Although the branch SRX Series varies greatly in terms of form factors and capabilities, the underlying hardware architecture remains the same. Figure 2-14 is highly simplified, but it is meant to illustrate how the platforms have a common architecture. It also provides a certain clarity to how the data center SRX Series looks when compared to the branch SRX Series.

In the center of Figure 2-14 is the shared compute resource, or processor. This processor is specifically designed for processing network traffic and is intended to scale and to provide parallel processing. With parallel processing, more than one task can be executed at a time; here, this is achieved by having multiple hardware cores running separate threads of execution. See the sidebar “Parallel Processing.”

Figure 2-14. Branch SRX Series hardware overview

Connected to this processor are the serial console and the USB ports, allowing the user to access the running system directly over the serial console and any attached storage through the USB ports.

Finally, the overview shows the interfaces. The interfaces connect to the processor: all of the onboard ports on each platform are connected as a local Ethernet switch, which is the same across the branch SRX products. Each WAN card is treated as a separate link back to the processor, and in the case of the SRX650, each Ethernet card is its own switch that connects back to the processor. Although oversimplified, this should provide a basic understanding of what is happening inside the sheet metal.

Licensing

The branch SRX Series supports numerous built-in features, including firewalling, routing, VPN, and NAT. However, some of the features require licensing to activate. This section is meant to clarify the licensing portion of the SRX products. Table 2-11 breaks out all of the possible licenses by vendor, description, and terms.

In regard to Table 2-11, note the following:

  • You can purchase a single license for all of the UTM features, including the antivirus, antispam, intrusion protection, and web filtering features.

  • Dynamic VPN is sold as a per-seat license, which counts the number of active users utilizing the features. This feature is only supported on the SRX100, SRX210, and SRX240.

  • The SRX650 and SRX550 can act as a Border Gateway Protocol (BGP) route reflector, effectively a route server that shares routes with other BGP hosts. This is licensed as a separate feature and is only applicable to the SRX650 and SRX550.

  • To manage an AX411 AP, a license is required. Two licenses are included with the purchase of the AX411; additional licenses can be purchased separately.

Table 2-11. Licensing options

Type                   Vendor                                             Description                                                         Terms
Antivirus              Juniper-Kaspersky/Juniper-Express/Juniper-Sophos   Antivirus updates                                                   1-, 3-, or 5-year
Antispam               Juniper-Sophos                                     Antispam updates                                                    1-, 3-, or 5-year
Intrusion protection   Juniper                                            Attack updates                                                      1-, 3-, or 5-year
Web filtering          Websense                                           Category updates                                                    1-, 3-, or 5-year
AppSecure              Juniper                                            Attack and IPS updates                                              1-, 3-, or 5-year
Combined set           All of the above                                   All of the above                                                    1-, 3-, or 5-year
Dynamic VPN client     Juniper                                            Concurrent users for Dynamic VPN; SRX100, SRX210, and SRX240 only   5, 10, 25, 50, 250, or 500 users, permanent
BGP route reflector    Juniper                                            Route reflector capability; SRX650 or SRX550 only                   Permanent
AX411 access point     Juniper                                            License to run AX411                                                Included with access point

Branch Summary

The branch SRX Series product line is extremely well rounded. In fact, it is the most fully featured, lowest-cost Junos platform that Juniper Networks offers. (This is great news for anyone who wants to learn how to use Junos and build a small lab.)

The branch SRX Series has both flow and packet modes, allowing anyone to test flow-based firewalling and packet-based routing. It features the same routing protocol support as all Junos-based devices, from BGP to Intermediate System-to-Intermediate System (IS-IS). It has the majority of the EX Series switching features with the same configuration set. Most important for study, it also supports MPLS and VPLS. No other router platform supports these features at such an attractive price point.

In terms of hardware, the underlying branch SRX Series device is fairly simple. It does not utilize any of the routing application-specific integrated circuits (ASICs) from the high-end routers or the data center SRX Series products; it is not feasible to build a sub-$1,000 platform with the exact same silicon as a million-dollar device, so some feature behaviors might vary across platforms. Those behaviors are noted in the documentation and throughout this book where applicable.

The branch SRX Series product line is the most accessible platform for a majority of this book’s readers. And because of its lower cost, there will be many more branch SRX Series products in the field.

Where differences exist between these SRX platforms, they will be noted so that you can learn these discrepancies and take them to the field, but note that many features are shared, so there will not be large differences across platforms. Zones and firewall policies remain the same across platforms, so you will see few differences when this book delves into this material.

Data Center SRX Series

The data center SRX Series product line is designed to be scalable and fast for data center environments where high performance is required. Unlike the branch products, the data center SRX Series devices are highly modular—a case in point is the base chassis for any of the products, which does not provide any processing power to process traffic because the devices are designed to scale in performance as cards are added. (It also reduces the total amount of investment that is required for an initial deployment.)

There are three lines of products in the data center SRX Series: the SRX1000, SRX3000, and SRX5000 lines. Each uses almost identical components, which is great because any testing done on one platform carries over to the others. It's also easier to maintain feature parity across the lines, because the data center SRX Series has specific ASICs and processors, and features that depend on them cannot be shared unless the hardware exists on both platforms. Where differences do exist, trust that they will be noted.

The SRX1000 line is the smallest of the three, designed for small- to medium-size data centers and Internet edge applications. A step up from the SRX1000 line is the SRX3000 line, which offers a more configurable midsized device. The SRX5000 line is the largest services gateway that Juniper offers; it is designed for medium to very large data centers and can scale from moderate to extreme performance levels.

All three platforms allow flexible configuration, letting the network architect essentially create a device for her own needs. Because processing and interfaces are both modular, it's possible to build a customized device, such as one tuned for heavy IPS inspection at lower throughput: the administrator adds fewer interface cards but more processing cards, so only a relatively small amount of traffic enters the device but receives an extreme amount of inspection. Alternatively, the administrator can create a data center SRX with many physical interfaces but fewer processors for inspection. All of this is possible with the data center SRX Series.

Data Center SRX-Specific Features

The data center SRX Series products are built to meet the specific needs of today's data centers. They share certain features that require the same underlying hardware to work, as well as a common need for those features in the data center; the platform stays focused on meeting the needs of that environment.

In the data center, IPS is extremely important in securing services, and the data center SRX Series devices have several IPS features that are currently not available on the branch SRX Series. Inline tap mode is one such data-center-specific feature, allowing the SRX to copy off sessions as they pass through the device. The SRX continues to process the traffic in Intrusion Detection and Prevention (IDP) while also passing the traffic through, but it only alerts (or logs) when an attack is detected, reducing the risk of a false positive dropping legitimate traffic.

Another specific feature that is common to the data center SRX Series is that they can be configured in what is known as dedicated mode. The data center SRX Series firewalls have dense and powerful processors, allowing flexibility in terms of how they can be configured. And much like adding additional processing cards, the SRX processors themselves can be tuned. Dedicated mode allows the SRX processing to be focused on IDP, and the overall throughput for IDP increases, as do the maximum session counts.

Because the branch SRX Series products utilize different processors, it is not possible to tune them for dedicated mode.

We cover many of these features, and others, throughout this book in various chapters and sections. Use the index at the end of the book as a useful cross-reference to these and other data center SRX Series features.

SPC

The element that provides all of the processing on the data center SRX Series is called the Services Processing Card (SPC). An SPC contains one or more Services Processing Units (SPUs). The SPU is the processor that handles all of the services on the data center SRX Series firewalls, from firewalling, NAT, and VPN to session setup and anything else the firewall does. There are two generations of the SPC. The first generation is simply called the SPC, and for most of this section we cover that version. Later, in the SRX5000 series section, we discuss the NG-SPC, which is only available for that platform. The largest difference between the two is that the NG-SPC offers twice the number of processors, and those processors deliver more advanced performance.

Each SPU provides extreme multiprocessing and can run 32 parallel tasks simultaneously. A task is run as a separate hardware thread (see the sidebar “Parallel Processing” earlier in this chapter for an explanation of hardware threads). This equates to an extreme amount of parallelism. An SPC can operate in four modes: full central point, small central point, half central point, and full flow. SPUs that operate in both central point and flow mode are said to be in combo mode. Based on the mode, the number of hardware threads will be divided differently.

The SPU can operate in up to four different distributions of threads, which breaks down to two different functions that it can provide: the central point and the flow processor. The central point (CP) is designed as the master session controller. The CP maintains a table for all of the sessions that are active on the SRX—if a packet is ever received on the SRX that is not matched as part of an existing session, it is sent to the CP. The CP can then check against its session table and see if there is an existing session that matches it. (We discuss the new session setup process in more detail shortly, once all of the required components are explained.)

The CP has three different settings so that users can scale the SRX appropriately. The CP takes part in the new session setup process, which determines new CPS. That process is distributed across multiple components in the system, and it would not make sense to dedicate a processor to maximizing CPS if there were not enough of the other components to support it. So, to provide balanced performance, the CP is automatically tuned to match the CPS capabilities of the rest of the platform, and any remaining hardware threads go back into processing network traffic. At any one time, only one processor acts as the CP, hence the term central point.

The remaining SPUs in the SRX are dedicated to process traffic for services. These processors are distributed to traffic as part of the new session setup process. Because each SPU eventually reaches a finite amount of processing, as does any computing device, an SPU will share any available computing power it has among the services. If additional processing power is required, more SPUs can be added. Adding more SPUs provides near-linear scaling for performance, so if a feature is turned on that cuts the required performance in half, simply adding another SPU will bring performance back to where it was.

The SPU’s linear scaling makes it easier to plan a network. If needed, a minimal number of SPUs can be purchased up front, and then, over time, additional SPUs can be added to grow with the needs of the data center. To give you an indication of the processing capabilities per SPU, Table 2-12 shows off the horsepower available.

Table 2-12. SPU processing capacities

Item                  Capacity
Packets per second    1,100,000
New CPS               50,000
Firewall throughput   10 Gbps
IPS throughput        2.5 Gbps
VPN throughput        2.5 Gbps

Each SPC in the SRX5000 line has two SPUs, and each SPC in the SRX1000/SRX3000 lines has a single SPU. As more processing cards are added, the SRX gains the additional capabilities listed in Table 2-12, so when additional services such as logging and NAT are turned on and the capacity per processor decreases slightly, additional processors can be added to offset the performance lost by adding new services.
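
This near-linear scaling can be sketched as simple arithmetic. The following is illustrative only, using the per-SPU figures from Table 2-12; the function name and the overhead model are assumptions, not Juniper tooling.

```python
# Per-SPU capacities from Table 2-12 (first-generation SPC).
PER_SPU = {"fw_gbps": 10.0, "ips_gbps": 2.5, "new_cps": 50_000}

def aggregate_capacity(spus, retained=1.0):
    """Estimate totals for `spus` processors; `retained` is the fraction
    of per-SPU capacity left after extra services (logging, NAT) are on."""
    return {metric: value * spus * retained for metric, value in PER_SPU.items()}

# Two SPUs at full capacity give roughly 20 Gbps of firewall throughput;
# if added services halve per-SPU capacity, four SPUs restore the total.
full = aggregate_capacity(2)
offset = aggregate_capacity(4, retained=0.5)
```

Both calls above yield the same 20 Gbps firewall figure, which is the planning property the text describes: adding SPUs offsets the cost of enabling new services.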

NPU

The Network Processing Unit (NPU) is similar in concept to the SPU. The NPU resides either on an input/output card (IOC) or on its own Network Processing Card (NPC), depending on the SRX platform (in the SRX5000 line, the NPU sits on the IOC; in the SRX1000/3000 lines, it is on a separate card).

When traffic enters an interface card, it must pass through an NPU before it can be sent on for processing. In the SRX5000 line, the physical interfaces and the NPU sit on the same interface card, so each interface module has its own NPU. In the SRX3000 line, each interface card is bound to one of the NPCs in the chassis: when an SRX3000 line appliance boots, interfaces are bound to NPUs in round-robin fashion until each interface has an NPU. It is also possible to bind interfaces to NPUs manually through the configuration.
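
The boot-time round-robin binding can be sketched as follows; this is a hypothetical illustration, and the interface and NPC names are made up.

```python
from itertools import cycle

def bind_interfaces(interfaces, npus):
    """Return an interface -> NPU mapping, assigned round-robin at boot."""
    rr = cycle(npus)  # repeat the NPU list until every interface is bound
    return {ifname: next(rr) for ifname in interfaces}

mapping = bind_interfaces(["ge-0/0/0", "ge-0/0/1", "xe-1/0/0"], ["npc0", "npc1"])
# ge-0/0/0 and xe-1/0/0 land on npc0, ge-0/0/1 on npc1
```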

The biggest difference between the SRX1000/3000 and SRX5000 lines' use of NPUs comes down to providing a lower-cost platform: separating the physical interfaces from the NPU reduces the overall cost of the cards. Optionally, the SRX now offers a 10-Gigabit Ethernet card with an integrated IOC and NPC, described in more detail later, which allows users to utilize the low latency firewall (LLFW) features.

The NPU is used as a part of the session setup process to balance packets as they enter the system. The NPU takes each packet and balances it to the correct SPU that is handling that session. In the event that there is not a matching session on the NPU, it forwards the packet to the CP to figure out what to do with it.

Each NPU can process about 6.5 million packets per second inbound and about 16 million packets per second outbound, and this applies across the entire data center SRX Series. The NPU matches a packet to a session using its wing table; a wing is half of a session, one direction of the bidirectional flow. Figure 2-15 depicts a wing in relation to a flow.

Figure 2-15. Sessions and wings

The card on which the NPU resides determines how much memory it has to store wings (some cards have more memory because there are fewer components on them). Table 2-13 lists the number of wings per NPU. Each wing has a five-minute keepalive: if five minutes pass without a packet matching the wing, the wing is deleted.

Table 2-13. Number of wings per NPU

Card type          NPUs per card   Wings per NPU
4x10G SRX5000      4               3 million
40x1G SRX5000      4               3 million
Flex I/O SRX5000   2               6 million
NPC SRX1000/3000   1               6 million
NP-IOC             1               6 million
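
The wing cache and its five-minute keepalive can be modeled as a small lookup table. This is a toy sketch; the structure and names are illustrative, not Juniper's implementation.

```python
WING_TIMEOUT = 300  # seconds: idle wings are deleted after five minutes

class WingTable:
    def __init__(self, max_wings):
        self.max_wings = max_wings   # e.g., 3 or 6 million per Table 2-13
        self.wings = {}              # flow key -> (owning SPU, last seen)

    def install(self, key, spu, now):
        if len(self.wings) < self.max_wings:
            self.wings[key] = (spu, now)

    def lookup(self, key, now):
        """Return the owning SPU and refresh the keepalive, or None if the
        packet must be forwarded to the CP instead."""
        entry = self.wings.get(key)
        if entry is None:
            return None
        spu, last_seen = entry
        if now - last_seen > WING_TIMEOUT:
            del self.wings[key]          # keepalive expired: wing deleted
            return None
        self.wings[key] = (spu, now)     # packet seen: refresh the timer
        return spu
```

A packet arriving 100 seconds after the previous one still matches its wing; once the flow sits idle for more than five minutes, the wing is gone and the next packet falls back to the CP.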

It is possible for the wing table on a single NPU to fill up; this can happen in the SRX5000 line because the total number of sessions exceeds the number of possible wings on a single NPU. To get around this, Juniper introduced a feature called NPU bundling in Junos 9.6, allowing two or more NPUs to be bundled together. The first NPU acts as a load balancer, distributing packets to the remaining NPUs in the bundle, which process them. This increases not only the total number of wings but also the maximum number of ingress packets per second. NPUs can be bundled on or across cards, with up to 16 NPUs in a single bundle and up to 8 different bundles. You can also use link aggregation to balance traffic across all of the NPUs in a link bundle. Generally, filling up the wings on an NPU is not a problem; only in extreme cases is it ever an issue.
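
The bundling idea can be sketched as below. The hash choice is an assumption; the text specifies only that the first NPU balances packets to the others.

```python
import zlib

def bundle_balance(flow_key, bundle):
    """bundle[0] is the balancer; the rest hold wings and process packets."""
    workers = bundle[1:]
    # Hash the flow key so every packet of a flow lands on the same NPU.
    return workers[zlib.crc32(repr(flow_key).encode()) % len(workers)]

bundle = ["npu0", "npu1", "npu2", "npu3"]  # up to 16 NPUs per bundle
npu = bundle_balance(("10.0.0.1", "10.2.0.2", 6), bundle)
```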

Additionally, in 12.1X44, an alternate mechanism was added to balance the traffic to the SPUs. This offers a more robust way to prevent the central point from being overwhelmed.

The NPU also provides other functions, such as a majority of the screening functions. A screen is an intrusion detection function. These functions typically relate to single packet matching or counting specific packet types. Examples of this are matching land attacks or counting the rate of TCP SYN packets. The NPU also provides some QoS functions.

Data Center SRX Series Session Setup

We discussed pieces of the session setup process in the preceding two sections, so here let’s put the entire puzzle together. It’s an important topic to discuss, because it is key to how the SRX balances traffic across its chassis. Figure 2-16 shows the setup we use for our explanation.

Figure 2-16. Hardware setup

Figure 2-16 depicts two NPUs: one for ingress traffic and the other for egress traffic. It also shows the CP. For this example, the processor handling the CP function will be dedicated to that purpose. The last component shown is the flow SPU, which is used to process the traffic flow.

Figure 2-17 shows the initial packet coming into the SRX. For this explanation, a TCP session will be created. This packet is first sent to the ingress NPU, where the ingress NPU checks against its existing wings. Because there are no existing wings, the NPU then must forward the packet to the CP, where the CP checks against its master session table to see if the packet matches an existing flow. Because this is the first packet into the SRX, and no sessions exist, the CP recognizes this as a potential new session.

Figure 2-17. The first packet

The packet is then sent to one of the flow SPUs in the system using the weighted round-robin algorithm.

Each SPU is weighted. A full SPU is given a weight of 100, a combo-mode SPU is given a weight of 60 if it’s a majority flow and a small CP, and a half-CP and half-flow SPU is given a weight of 50. This way, when the CP is distributing new sessions, the sessions are evenly distributed across the processors.
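
The weighting scheme just described can be sketched as follows. The smooth weighted round-robin variant used here is an assumption; the text says only that the CP distributes new sessions by weighted round-robin.

```python
# Weights from the text: full-flow SPU 100, combo mode with a small CP 60,
# half-CP/half-flow 50.
WEIGHTS = {"full": 100, "small-cp-combo": 60, "half-cp-combo": 50}

def distribute(spus, sessions):
    """spus: list of (name, mode) pairs. Returns sessions assigned per SPU."""
    credit = {name: 0 for name, _ in spus}
    counts = {name: 0 for name, _ in spus}
    total = sum(WEIGHTS[mode] for _, mode in spus)
    for _ in range(sessions):
        for name, mode in spus:
            credit[name] += WEIGHTS[mode]   # accrue weight each round
        pick = max(credit, key=credit.get)  # highest credit wins the session
        credit[pick] -= total
        counts[pick] += 1
    return counts

# A full SPU (weight 100) paired with a half-CP SPU (weight 50)
# splits 150 new sessions 100/50, matching the 2:1 weight ratio.
counts = distribute([("spu0", "full"), ("spu1", "half-cp-combo")], 150)
```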

In Figure 2-17, there is only a single SPU, so the packet is sent there.

The SPU does a basic sanity check on the packet and then sets up an embryonic session, which lasts for up to 20 seconds, and the CP is notified of this embryonic session. The remaining SYN-ACK and ACK packets must be received before the session is fully established. Until then, the NPUs forward the SYN-ACK and ACK packets to the CP, and the CP forwards them to the correct SPU, which it can identify because the SPU has the embryonic session in its session table.

In Figure 2-18, the session has been established: the three steps of the three-way handshake have completed. Once the SPU has seen the final ACK packet, it completes the session establishment on the device, first sending a message to the CP to turn the embryonic session into a complete session, and then starting the session timer at the full timeout for the protocol. Next, the SPU notifies the ingress NPU, which installs a wing identifying the session and specifying which SPU is responsible for it. When the ACK packet that completed the session establishment is sent out of the SRX, a message is tacked onto it; the egress NPU interprets this message and installs the wing into its local cache. This wing is similar to the ingress wing except that some elements are reversed, matching the destination talking to the source (see Figure 2-15 for a representation of the wing).

Figure 2-18. Session established

Now that the session is established, the data portion of the session begins, as shown in Figure 2-19 where a data packet is sent and received by the NPU. The NPU checks its local wing table and sees that it has a match and then forwards the packet to the SPU. The SPU then validates the packet, matching the packet against the session table to ensure that it is the next expected packet in the data flow. The SPU then forwards the packet out the egress NPU. (The egress NPU does not check the packet against its wing table; a packet is only checked upon ingress to the NPU.) When the egress NPU receives a return packet, it is being sent from the destination back to the source. This packet is matched against its local wing table and then processed through the system as was just done for the first data packet.

Figure 2-19. Existing session

Last, when the session has completed its purpose, the client will start to end the session. In this case, a four-way FIN close is used. The sender starts the process, and the four closing packets are treated the same as packets for the existing session. What happens next is important, as shown in Figure 2-20. Once the SPU has processed the closing process, it shuts down the session on the SRX, sending a message to the ingress and egress NPUs to delete their wings. The SPU also sends a close message to the CP. The CP and SPU wait about eight seconds to complete the session close to ensure that everything was closed properly.

Figure 2-20. Session teardown
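
The whole lifecycle in Figures 2-17 through 2-20 can be condensed into a toy walk-through. All names and structures below are illustrative stand-ins for the CP session table, SPU session state, and NPU wing caches.

```python
class ToySRX:
    def __init__(self):
        self.cp = {}                 # CP master session table
        self.spu = {}                # flow SPU session state
        self.ingress_wings = set()   # wing caches on the two NPUs
        self.egress_wings = set()

    def syn(self, key):
        # First packet: no wing matches, so NPU -> CP -> SPU; an
        # embryonic session is created and the CP is notified.
        self.cp[key] = self.spu[key] = "embryonic"

    def final_ack(self, key):
        # Handshake complete: session promoted, wings installed on both
        # NPUs (the egress wing has its elements reversed).
        self.cp[key] = self.spu[key] = "established"
        self.ingress_wings.add(key)
        self.egress_wings.add(tuple(reversed(key)))

    def close(self, key):
        # Four-way FIN close done: SPU tells both NPUs to delete their
        # wings and sends a close message to the CP.
        self.ingress_wings.discard(key)
        self.egress_wings.discard(tuple(reversed(key)))
        del self.spu[key], self.cp[key]

srx = ToySRX()
flow = ("10.1.1.1", "10.2.2.2")
srx.syn(flow)          # embryonic session on CP and SPU
srx.final_ack(flow)    # wings installed, session established
srx.close(flow)        # wings and session state removed
```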

Although this seems like a complex process, it also allows the SRX to scale. As more and more SPUs and NPUs are added into the system, this defined process allows the SRX to balance traffic across the available resources. Over time, session distribution is almost always nearly even across all of the processors, a fact proven across many SRX customer deployments. Some have had concerns that a single processor would be overwhelmed by all of the sessions, but that has not happened and cannot happen using this balancing mechanism. In the future, if needed, Juniper could implement a least-connections model or least-utilization model for balancing traffic, but it has not had to as of Junos 10.2.

As mentioned earlier, the Junos 12.1X44 release offers a new way to increase session scale. This feature is off by default, but once enabled, it prevents the central point from being overwhelmed in the event that the NPU cache is exhausted.

Data Center SRX Series Hardware Overview

So far we’ve talked about the components of the data center SRX Series, so let’s start putting the components into the chassis. The data center SRX Series consists of two different lines and four different products. Although they all use the same fundamental components, they are designed to scale performance for where they are going to be deployed, and that isn’t easy. The challenge is that a single processor can only be so fast and it can only have so many simultaneous threads of execution. To truly scale to increased performance within a single device, a series of processors and balancing mechanisms must be utilized.

Because the initial design goal of the SRX was to do all of this scaling in a single product, and allow customers to choose how they wanted (and how much) to scale the device, it should be clear that the SPUs and the NPUs are the points to scale (especially if you just finished reading the preceding section).

The NPUs allow traffic to come into the SRX, and the SPUs allow for traffic processing. Adding NPUs allows for more packets to get into the device, and adding SPUs allows for linear scaling. Of course, each platform needs to get packets into the device, which is done by using interface cards, and each section on the data center SRX Series will discuss the interface modules available per platform.

SRX1000 Series

The SRX1000 line is the smallest of the three data center SRX Series lines. It is designed for the Internet edge or small data center environments. The SRX1400 product, the only product currently available in the SRX1000 line, offers some modularity but is the least flexible of the data center SRXs. The base chassis comes with a route engine (RE), a system I/O (SYSIO), and one power supply.

The RE is a computer that runs the management functions for the chassis, controlling and activating the other components in the device. All configuration management is also done from the RE. The reason it is called a route engine is because it runs the routing protocols, and on other Junos device platforms such as the M Series, T Series, and MX Series, the RE is, of course, a major part of the device. However, although SRX devices do have excellent routing support, most customers do not use this feature extensively.

The SYSIO contains several important components for the system: the data plane fabric, the control plane Ethernet network, and built-in Ethernet data ports. The SYSIO has six 10/100/1000 ports and six SFPs. There is also a second version of the SYSIO that has nine 10/100/1000 Ethernet ports and three SFP+ ports, each of which can run at either 1G or 10G. This option must be ordered from the factory.

The SRX1400 has a special card called an NSPC. This double-wide card fits into a single slot and offers a lower cost by combining one NPC and one SPC on a single card. Alternatively, you can buy a carrier tray that lets you use a single NPC and SPC module in the chassis, a good option if you have other SRXs with which you want to interchange cards.

The four types of cards that the SRX1400 can use are interface cards, NPCs, SPCs, and NSPCs, and Table 2-14 lists the minimum and maximum number of cards per chassis by type.

Table 2-14. SRX1400 FPC numbers

Type       Minimum   Maximum   Install location
I/O card   0         1         Front slots
SPC        0         2         Any
NPC        0         1         Top right slot
NSPC       0         1         Top double-wide slot

The SRX1400 is 3 rack units high and only 14 inches deep. You could potentially put two SRX1400s back to back in a four-post rack. Figure 2-21 shows the SRX1400. As of Junos 12.1X44, you can use up to two SPCs on this SRX. The SPC can be added to the slot on the bottom left. Alternatively, you could use an I/O card in that slot. The SRX1400 is actually the rear end of an SRX3400. Effectively, that chassis was cut in half and only the rear of the device was used.

The SRX1400
Figure 2-21. The SRX1400

The performance of the SRX1400 is enough for most Internet edge or small data center applications. It offers up to 20 Gbps of firewall throughput if two SPCs are utilized. However, most customers use a single SPC to reduce the overall cost of the platform. The ability to add a second SPC offers a little room for growth.

As shown in Table 2-15, the SRX1400 can also offer both IPS and VPN up to 4 Gbps of throughput. Each number is mutually exclusive (each SPU has a limited amount of computing power). The SRX1400 can use the same interface modules as the SRX3000 series. These modules are listed in the next section.

Table 2-15. SRX1400 capacities

Type                          Capacity
CPS                           90,000
Maximum firewall throughput   20 Gbps
Maximum IPS throughput        4 Gbps
Maximum VPN throughput        4 Gbps
Maximum concurrent sessions   1.5 million
Maximum firewall policies     40,000
Maximum concurrent users      Unlimited

SRX3000 Series

The SRX3000 line is the middle line of the three data center SRX Series lines. It is designed for the Internet edge or large data centers. The SRX3000 products are extremely modular. The base chassis comes with an RE, a switch fabric board (SFB), and the minimum required power supplies. The RE is a computer that runs the management functions for the chassis, controlling and activating the other components in the device.

The SFB contains several important components for the system: the data plane fabric, the control plane Ethernet network, and built-in Ethernet data ports. The SFB has eight 10/100/1000 ports and four SFPs. It also has a USB port that connects into the RE and a serial console port. All products in the SRX3000 line contain the SFB. The SFB also contains an out-of-band network management port that is not connected to the data plane; this is the preferred way to manage the SRX3000 line.

The SRX3400 is the base product in the SRX3000 line. It has seven flexible PIC concentrator (FPC) slots (a PIC is a physical interface card), four in the front of the chassis and three in the rear. The slots enable network architects to mix and match the cards, allowing them to decide how the firewall is to be configured. The three types of cards that the SRX3400 can use are interface cards, NPCs, and SPCs, and Table 2-16 lists the minimum and maximum number of cards per chassis by type.

Table 2-16. SRX3400 FPC numbers

Type       Minimum   Maximum   Install location
I/O card   0         4         Front slots
SPC        1         4         Any
NPC        1         2         Rear three

The SRX3400 is 3 rack units high and a full 25.5 inches deep. That’s the full depth of a standard four-post rack. Figure 2-22 shows the front and back of the SRX3400, in which the SFB can be seen as the wide card that is in the top front of the chassis on the left, the FPC slots in both the front and rear of the chassis, and the two slots in the rear of the chassis for the REs. You can add a CRM module in the second slot, which offers dual control ports.

The front and back of the SRX3400
Figure 2-22. The front and back of the SRX3400

Performance on the SRX3400 is impressive, and Table 2-17 lists the maximum performance. These numbers are achieved with a fully populated platform of four SPCs, two NPCs, and one IOC. In that configuration the SRX3400 can provide up to 180,000 new CPS, a huge number that dwarfs the performance of the branch series. The average customer might not need such rates on a continuous basis, but it’s great to have the horsepower in the event that traffic begins to flood through the device.

The SRX3400 offers an optional mode, called extreme mode, in which CPS increases to 300,000. This converts the partial central point into a full central point, increasing the new-CPS rate. Originally, this was a paid license, but the feature is now free.

The SRX3400 can pass a maximum of 20 Gbps of firewall throughput. This limitation comes from two components, the maximum number of NPCs and the interfaces, which together limit the overall throughput. As discussed before, each NPC can take a maximum of 6.5 million packets per second inbound, and in the maximum throughput configuration, one interface card and the onboard interfaces are used. With a total of 20 Gbps of ingress capacity, it isn’t possible to get more traffic into the box.
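A quick back-of-the-envelope check (illustrative arithmetic, not a vendor benchmark) shows which component caps throughput at a given packet size. The NPC packet-rate limit dominates for small packets, while the 20 Gbps of interface capacity dominates for large ones:

```python
# Which component caps SRX3400 throughput at a given packet size?
# Figures from the text: 6.5 Mpps per NPC, 2 NPCs max, 20 Gbps of interfaces.
def max_throughput_gbps(packet_bytes, npcs=2, npc_pps=6_500_000,
                        interface_gbps=20):
    # NPC limit in Gbps: packets/sec * packet size in bits / 1e9
    npc_limit = npcs * npc_pps * packet_bytes * 8 / 1e9
    return min(npc_limit, interface_gbps)

print(max_throughput_gbps(1500))  # interfaces are the bottleneck (20 Gbps)
print(max_throughput_gbps(64))    # ~6.7 Gbps: NPC packet rate dominates
```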

Table 2-17. SRX3400 capacities

Type                          Capacity
CPS                           180,000-300,000 (with extreme mode)
Maximum firewall throughput   20 Gbps
Maximum IPS throughput        6 Gbps
Maximum VPN throughput        6 Gbps
Maximum concurrent sessions   2.25/3 million
Maximum firewall policies     40,000
Maximum concurrent users      Unlimited

As shown in Table 2-17, the SRX3400 can also provide several other services, such as both IPS and VPN up to 6 Gbps. Each number is mutually exclusive (each SPU has a limited amount of computing power). The SRX3400 can also have a maximum of 2.25 million sessions. In today’s growing environment, a single host can demand dozens of sessions at a time, so 2.25 million sessions might not be a high enough number, especially for larger scale environments. By installing an extreme license you can boost the capacity up to 3 million sessions. The license is available free of charge.

If more performance is required, it’s common to move up to the SRX3600. This platform is nearly identical to the SRX3400, except that it adds more capacity by increasing the total number of FPC slots in the chassis. The SRX3600 has a total of 14 FPC slots, doubling the capacity of the SRX3400. This does increase the chassis height to five rack units (the depth remains the same). Table 2-18 lists the minimum and maximum number of cards by type per chassis.

Table 2-18. SRX3600 FPC numbers

Type       Minimum   Maximum   Install location
I/O card   0         6         Front slots
SPC        1         7         Any
NPC        1         3         Last rear three

As mentioned, the SRX3600 chassis is nearly identical to the SRX3400, except for the additional FPC slots. But two other items are different between the two chassis, as you can see in Figure 2-23, where the SRX3600 has an additional card slot above the SFB. Although it currently does not provide any additional functionality, a double-height SFB could be placed in that location in the future. In the rear of the chassis, the number of power supplies has doubled to four, to support the additional power needs. A minimum of two power supplies are required to power the chassis, but to provide full redundancy, all four should be utilized.

The SRX3600
Figure 2-23. The SRX3600

Table 2-19 lists the maximum performance of the SRX3600. These numbers are tested with a configuration of two 10G I/O cards, three NPCs, and seven SPCs, which provides additional throughput. The firewall capability rises to a maximum of 30 Gbps, primarily because of the additional interface module and NPC. The VPN and IPS numbers also rise to 10 Gbps, whereas the CPS and session maximums remain the same. The SRX3000 line utilizes a combo-mode CP processor, in which half of the processor is dedicated to processing traffic and the other half to setting up sessions. The SRX5000 line is capable of providing a full CP processor.

Table 2-19. SRX3600 capacities

Type                           Capacity
CPS                            180,000/300,000
Maximum firewall throughput    30 Gbps
Maximum IPS throughput         10 Gbps
Maximum AppSecure throughput   25 Gbps
Maximum VPN throughput         10 Gbps
Maximum concurrent sessions    2.25/6 million
Maximum firewall policies      40,000
Maximum concurrent users       Unlimited

IOC modules

In addition to the built-in SFP interface ports, you can use three additional types of interface modules with the SRX3000 line, and Table 2-20 lists them by type. Each interface module is oversubscribed, with the goal of providing port density rather than line rate cards. The capacity and oversubscription ratings are also listed.

Table 2-20. SRX3000 I/O module summary

Type                Description
10/100/1000 copper  16-port 10/100/1000 copper with 1.6:1 oversubscription
1G SFP              16-port SFP with 1.6:1 oversubscription
10G XFP             2 × 10G XFP with 2:1 oversubscription
10G SFP+ with NPC   2 × 10G SFP+ with 2:1 oversubscription

Table 2-20 lists two types of 1G interface card, both with sixteen 1G interface slots. The media type is the only difference between the modules: one has 16 10/100/1000 copper interfaces and the other has 16 SFP ports. The benefit of the 16 SFP interfaces is that a mix of fiber and copper transceivers can be used, as opposed to the fixed copper-only card. Both cards are oversubscribed at a ratio of 1.6:1.

The 2 × 10G XFP (10 Gigabit Small Form Factor Pluggable) card provides two 10G interfaces and is oversubscribed at a ratio of 2:1. Although the card is oversubscribed by two times, its port density is its greatest value, because providing more ports allows for additional connectivity into the network. Most customers will not require all of the ports on the device to operate at line rate, and if more line-rate ports are required, the SRX5000 line can provide these capabilities.

The remaining card listed in Table 2-20 is a 2 × 10G SFP+ card. This card offers not only two 10G ports but also a dedicated NPC. It is used in conjunction with LLFW, or services offload, and offers low-latency stateful firewall performance, which is excellent for environments where low latency is required.

Each module has a 10G full duplex connection into the fabric. This means 10 gigabits of traffic per second can enter and exit the module simultaneously, providing a total of 20 gigabits of traffic per second that could traverse the card at the same time.
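The oversubscription ratios in Table 2-20 follow directly from each module's 10 Gbps per-direction fabric connection; the ratio is simply front-panel capacity divided by fabric capacity. A quick illustration:

```python
# Oversubscription = total front-panel capacity / fabric capacity.
# Each SRX3000 I/O module has a 10 Gbps (per direction) fabric connection.
FABRIC_GBPS = 10

def oversubscription(ports, port_speed_gbps):
    return ports * port_speed_gbps / FABRIC_GBPS

print(oversubscription(16, 1))   # 1.6 -> the 1.6:1 ratio of the 16-port 1G cards
print(oversubscription(2, 10))   # 2.0 -> the 2:1 ratio of the 2-port 10G cards
```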

SRX5000 Series

The SRX5000 line of firewalls is the largest in the SRX Series, both in size and capacity. The SRX5000 line provides maximum modularity in the number of interface cards and SPCs the device can utilize, for a “build your own services gateway” approach, while allowing for expansion over time.

The SRX5000 line currently includes two different models: the SRX5600 and the SRX5800. Fundamentally, both platforms are the same. They share the same major components, except for the chassis and how many slots are available, dictating the performance of these two platforms.

The first device to review is the SRX5600. This chassis is the smaller of the two, containing a total of eight slots. The bottom two slots are for the switch control boards (SCBs), an important component in the SRX5000 line, as they contain three key items: a slot to place the RE; the switch fabric for the device; and one of the control plane networks.

The RE in the SRX5000 line is the same concept as in the SRX3000 line, providing all of the chassis and configuration management functions. It also runs the processes that run the routing protocols (if the user chooses to configure them). The RE is required to run the chassis and it has a serial port, an auxiliary console port, a USB port, and an out-of-band management Ethernet port. The USB port can be used for loading new firmware on the device, and the out-of-band Ethernet port is the suggested port for managing the SRX.

The switch fabric is used to connect the interface cards and the SPCs together, and all traffic that passes through the switch fabric is considered to be part of the data plane. The control plane network provides the connectivity between all of the components in the chassis. This gigabit Ethernet network is used by the RE to talk to all of the line cards, and it allows management traffic to come back to the RE from the data plane. If the RE sends traffic, that traffic goes from the control plane network and is inserted into the data plane.

Only one SCB is required to run the SRX5600; a second SCB can be used for redundancy. (Note that if just one SCB is utilized, unfortunately the remaining slot cannot be used for an interface card or an SPC.) The SRX5600 can utilize up to two REs, one to manage the SRX and the other to create dual control links in HA.

On the front of the SRX5600, as shown in Figure 2-24, is what is called the craft interface: a series of labeled buttons on the top front of the chassis that allow you to enable and disable the individual cards. The SRX5600, unlike the SRX5800, can use 120v power, which can be beneficial in environments where 220v power is not available or where rewiring electrical feeds is undesirable. The SRX5600 is eight rack units tall and 23.8 inches deep.

The SRX5600
Figure 2-24. The SRX5600

The SRX5000 line is quite flexible in its configuration, with each chassis requiring a minimum of one interface module and one SPC. Traffic must be able to enter the device and be processed; hence, these two cards are required. The remaining slots in the chassis are the network administrator’s choice. This offers several important options.

The SRX5000 line has a relatively low barrier of entry because just a chassis and a few interface cards are required. In fact, choosing between the SRX5600 and the SRX5800 comes down to space, power, and long-term expansion.

For space considerations, the SRX5600 is physically half the size of the SRX5800, a significant fact considering that these devices are often deployed in pairs, and that two SRX5800s take up two thirds of a physical rack. In terms of power, the SRX5600 can run on 110v, whereas the SRX5800 needs 220v.

The last significant difference between the SRX5600 and the SRX5800 data center devices is their long-term expansion capabilities. Table 2-21 lists the FPC slot capacities of the SRX5600. As stated, the minimum is two cards, one interface card and one SPC, leaving four slots that can be mixed and matched among cards. Because of the high-end fabric in the SRX5600, card placement has no effect on performance: the cards can be placed in any slots and the throughput is the same. This is important to note because in some vendors’ products, maximum throughput drops when traffic must cross the backplane.

Table 2-21. SRX5600 FPC numbers

Type             Minimum    Maximum   Install location
FPC slots used   1 (SCB)    8         All slots are FPCs
I/O card         1          5         Any
SPC              1          5         Any
SCB              1          2         Bottom slots

In the SRX5800, the requirements are similar. One interface card and one SPC are required for the minimum configuration, and the ten remaining slots can be used for any additional combination of cards. Even if the initial deployment only requires the minimum number of cards, it still makes sense to look at the SRX5800 chassis. It’s always a great idea to get investment protection out of the purchase. Table 2-22 lists the FPC capacity numbers for the SRX5800.

Table 2-22. SRX5800 FPC numbers

Type             Minimum    Maximum   Install location
FPC slots used   2 (SCBs)   14        All slots are FPCs
I/O card         1          11        Any
SPC              1          11        Any
SCB              2          3         Center slots

The SRX5800 has a total of 14 slots, and in this chassis, the 2 center slots must contain SCBs, which doubles the capacity of the chassis. Because it has twice the number of slots, it needs two times the fabric. Even though two fabric cards are utilized, there isn’t a performance limitation for going between any of the ports or cards on the fabric (this is important to remember, as some chassis-based products do have this limitation). Optionally, a third SCB can be used, allowing for redundancy in case one of the other two SCBs fails.

Figure 2-25 illustrates the SRX5800. The chassis is similar to the SRX5600, except the cards are positioned perpendicular to the ground, which allows for front-to-back cooling and a higher density of cards within a 19-inch rack. At the top of the chassis, the same craft interface can be seen. The two fan trays for the chassis are front-accessible above and below the FPCs.

The SRX5800
Figure 2-25. The SRX5800

In the rear of the chassis there are four power supply slots. In an AC electrical deployment, three power supplies are required, with the fourth for redundancy. In a DC power deployment, the redundancy is 2 + 2, or two active supplies and two supplies for redundancy. Check with the latest hardware manuals for the most up-to-date information.

Optionally, you can use the NG-PSU or next-generation power supply units. These units offer 2 + 2 redundancy as they provide more power per power supply.

The performance metrics for the SRX5000 line are very impressive, as listed in Table 2-23. The CPS rate is bounded by the maximum number of packets per second that the central point processor can handle. At roughly three packets per new connection, 350,000 CPS works out to 1.05 million packets per second, which is about the maximum packet rate per SPU. Although this many connections per second is not required for most environments, at a mobile services provider, a large data center, or a full cloud network, or any environment with tens of thousands of servers and hundreds of thousands of inbound clients, this rate of CPS might be just right.
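The arithmetic behind that central point ceiling is straightforward. Using the figures from the text (roughly three packets handled per new connection, an approximation stated there rather than a measured constant):

```python
# Central point packet budget: ~3 packets per new connection (per the text),
# so the CPS ceiling maps directly to a packets-per-second figure.
packets_per_new_connection = 3
max_cps = 350_000

cp_pps = max_cps * packets_per_new_connection
print(cp_pps)  # 1050000, i.e., 1.05 million packets per second
```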

Table 2-23. SRX5000 line capacities for original SPC

Type                          SRX5600 capacity   SRX5800 capacity
CPS                           380,000            380,000
Maximum firewall throughput   70 Gbps            150 Gbps
Maximum IPS throughput        12 Gbps            26 Gbps
Maximum VPN throughput        15 Gbps            30 Gbps
Maximum concurrent sessions   9 million          12.5/20 million
Maximum firewall policies     80,000             80,000
Maximum concurrent users      Unlimited          Unlimited

For the various throughput numbers shown in Table 2-23, each metric roughly doubles from the SRX5600 to the SRX5800, so the maximum firewall throughput is 70 Gbps on the SRX5600 and 150 Gbps on the SRX5800. This number is achieved using HTTP large gets to create large stateful packet transfers; the number could be larger if UDP streams were used, but that is less valuable to customers, so the stateful HTTP numbers are utilized. The IPS and VPN throughputs follow the same pattern: IPS is 12 Gbps and 26 Gbps, and VPN is 15 Gbps and 30 Gbps, on the SRX5600 and SRX5800, respectively.

It is possible to increase the session capacity on the SRX5800 from 12.5 million sessions up to 20 million sessions. This requires eight SPCs and then enabling the max sessions knob in the CLI.

The IPS throughput numbers are achieved using the older NSS 4.2.1 testing standard. Note that this is not the same test that is used to measure the maximum firewall throughput. The NSS test yields about half of the possible throughput of the large HTTP transfers, so if the large-HTTP-transfer test were run with IPS, about double the throughput would be achieved.

These performance numbers were achieved using two interface cards and four SPCs on the SRX5600. On the SRX5800, four interface cards and eight SPCs were used. As discussed throughout this section, it’s possible to mix and match modules on the SRX platforms, so if additional processing is required, more SPCs can be added. Table 2-24 lists several examples of this “more is merrier” theme.

Table 2-24. Example SRX5800 line configurations

Example network        IOCs          SPCs   Goal
Mobile provider        1             6      Max sessions and CPS
Financial network      2             10     Max PPS
Data center IPS        1             11     Maximum IPS inspection
Maximum connectivity   8 flex IOCs   4      64 10G interfaces for customer connectivity

A full matrix and example use cases for the modular data center SRX Series could fill an entire chapter in a how-to data center book. Table 2-24 highlights only a few, the first for a mobile provider. A mobile provider needs to have the highest number of sessions and the highest possible CPS, which could be achieved with six SPCs. In most environments, the total throughput for a mobile provider is low, so a single IOC should provide enough throughput.

In a financial network, the packets-per-second (PPS) rate is the most important metric. To provide these rates, two IOCs are used, each configured using NPU bundling to allow for 10 Gbps of ingress of small 64-byte packets. The 10 SPCs provide packet processing and security for these small packets.

In a data center environment, an SRX can be deployed for IPS capabilities only, so here the SRX would need only one IOC to have traffic come into the SRX. The remaining 11 slots would be used to provide IPS processing, allowing for a total of 45 Gbps IPS inspection in a single SRX. That is an incredible amount of inspection in a single chassis.

The last example in Table 2-24 is for maximum connectivity. This example offers sixty-four 10G Ethernet ports. These ports are oversubscribed at a ratio of 4:1, but again the idea here is connectivity. The remaining four slots are dedicated to SPCs. Although the number of SPCs is low, this configuration still provides up to 70 Gbps of firewall throughput. Each 10G port could use 1.1 Gbps of throughput simultaneously.
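The per-port figure in the maximum-connectivity example is just the firewall budget divided across the ports (an illustrative average, not a per-port guarantee):

```python
# Maximum-connectivity example: 70 Gbps of firewall throughput shared
# across sixty-four 10G ports gives each port's average budget.
total_fw_gbps = 70
ports = 64

per_port = total_fw_gbps / ports
print(round(per_port, 2))  # 1.09, i.e., roughly the 1.1 Gbps cited in the text
```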

NG-SPC

Because the needs of service providers and high-end data centers are always growing, Juniper focuses on innovating new products for the future. The most important area of growth is the SPC, as it is the largest bottleneck in the SRX. The Next Generation SPC (NG-SPC) is a new product being launched for the SRX in early 2013. This card provides an extreme boost to the performance of the SRX5000 series. Unlike the original SPC, it contains four SPUs, each a newer generation of processor than the one used on the original SPC.

The projected performance at launch of the NG-SPC is a considerable boost over the existing cards: a minimum of 5 Mpps for firewall versus 2 Mpps on the existing cards, and a single NG-SPC card is capable of 240,000 new CPS, roughly a 100 percent improvement as well. On the services side, IPsec rises to 16 Gbps and IPS to between 5 Gbps and 11 Gbps. These numbers are all preliminary, based on the 12.1X44 software release. Expect them to increase over time, and look for official up-to-date numbers on Juniper’s website.

Due to the additional processor capabilities, an existing SRX5000 series chassis needs a slight upgrade to support these new cards: upgraded fans and power supplies to provide the additional cooling and power these high-compute cards require. Currently, these processors are only for the SRX5000 line, but expect them to trickle down to the other data center SRX products over time.

IOC modules

The SRX5000 line has three types of IOCs, two of which provide line rate throughput while the remaining is oversubscribed. Figure 2-26 illustrates an example of the interface complex of the SRX5000 line. The image on the left is the PHY, or physical chip, that handles the physical media. Next is the NPU or network processor. The last component is the fabric chip. Together, these components make up the interface complex. Each complex can provide 10 Gbps in both ingress and egress directions, representing 20 Gbps full duplex of throughput.

Interface complex of the SRX5000 line
Figure 2-26. Interface complex of the SRX5000 line

Each type of card has a different number of interface complexes on it, and Table 2-25 lists the number of interface complexes per I/O card type. Each complex is directly connected to the fabric, meaning there is no benefit to keeping traffic between complexes local to the same card. This is a huge advantage of the SRX product line, because you can place any cards you add anywhere you want in the chassis.

Table 2-25. Complexes per line card type

Type       Complexes
4 × 10G    4
40 × 1G    4
Flex IOC   2
NG-IOC     4

The most popular IOC for the SRX is the four-port 10 gigabit card. The 10 gigabit ports utilize the XFP optical transceivers. Each 10G port has its own complex providing 20 Gbps full duplex of throughput, which puts the maximum ingress on a 4 × 10G IOC at 40 Gbps and the maximum egress at 40 Gbps.

The second card listed in Table 2-25 is the 40 × 1G SFP IOC. This blade has four complexes, just as the four-port 10 gigabit card does, but instead of one 10G port, each complex has ten 1G ports, for forty in total. The blade offers the same 40 Gbps ingress and 40 Gbps egress metrics as the four-port 10 gigabit card, but this card also supports the ability to mix both copper and fiber SFPs.

The Flex IOC card has two complexes on it, with each complex connected to a modular slot. The modular slot can utilize one of three different cards:

  • The first card is a 16-port 10/100/1000 card. It has 16 tri-speed copper Ethernet ports. Because it has sixteen 1G ports and the complex it is connected to can only pass 10 Gbps in either direction, this card is oversubscribed by a ratio of 1.6:1.

  • Similar to the first card is the 16-port SFP card. The difference here is that instead of copper ports, the ports utilize SFPs and the SFPs allow the use of either fiber or copper transceivers. This card is ideal for environments that need a mix of fiber and copper 1G ports.

  • The last card is the dense four-port 10G card. It has four 10-gigabit ports. Each port is still an XFP port. This card is oversubscribed by a ratio of 4:1 and is ideal for environments where connectivity is more important than line rate throughput.
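The Flex IOC ratios above follow the same fabric arithmetic as the SRX3000 modules: each complex feeds one modular slot at 10 Gbps per direction, so a daughter card's oversubscription is its front-panel capacity divided by 10 Gbps. A short illustration:

```python
# Each Flex IOC complex connects one modular slot to the fabric at
# 10 Gbps per direction; oversubscription = port capacity / 10 Gbps.
COMPLEX_GBPS = 10

def flex_ratio(ports, port_speed_gbps):
    return ports * port_speed_gbps / COMPLEX_GBPS

print(flex_ratio(16, 1))   # 1.6 -> 1.6:1 for both 16-port 1G daughter cards
print(flex_ratio(4, 10))   # 4.0 -> 4:1 for the dense four-port 10G card
```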

Summary

Juniper Networks’ SRX Series Services Gateways are the company’s next-generation firewall offerings. Juniper brings the Junos OS onto the SRX, enabling carrier-class reliability. This chapter introduced a multitude of platforms, features, and concepts; the rest of the book will complete your knowledge in all of the areas that have been introduced here. The majority of the features are shared across the platforms, so as you read through the rest of the book, you will be learning a skill set that you can apply to small hand-sized firewalls as well as larger devices. Your journey through the material might seem long, but the reward will be great as well. The concepts in this book apply not only to the SRX, but to all of the products in the Junos product line.

Study Questions

Questions

  1. Which of the SRX platforms can use WAN interfaces?

  2. What are the Ethernet switching restrictions on the branch SRX Series?

  3. What is the true cutoff limit for using a branch device in a branch and not using it in a larger environment such as a data center?

  4. The SRX5000 line seems to have “too much” performance. Is such a device needed?

  5. What is the biggest differentiator between the branch SRX Series and data center SRX Series platforms?

  6. Which SRX platforms support the UTM feature set?

  7. Why can’t the data center SRX Series manage the AX411 Wireless LAN Access Point?

  8. For how long a term can you purchase a license for a Junos feature?

  9. What does a Services Processing Card do?

  10. What is the benefit of the distributed processing model on the data center SRX Series?

Answers

  1. The SRX210, SRX220, SRX240, SRX550, and SRX650 can use WAN interfaces. These are part of the branch SRX Series. As they are placed in a branch, they are more likely to be exposed to non-Ethernet interfaces and need to accommodate various media types.

  2. Ethernet switching can only be done across the same card. It is not possible to switch across multiple line cards. The branch SRX Series devices use a switching chip on each of their interface modules. Switched traffic must stay local to the card. It is possible to go across cards, but that traffic will be processed by the firewall.

  3. It’s possible to place a branch device in any location. The biggest cutoff typically is the number of concurrent sessions. When you are unable to create new sessions, there isn’t much the firewall can do with new traffic besides drop it. The second biggest limit is throughput. If the firewall can create the session but not push the traffic, it doesn’t do any good. If a branch SRX Series product can meet both of these needs, it might be the right solution for you.

  4. The SRX5800 can provide an unprecedented amount of throughput and interface density. Although this device might seem like overkill, in many networks it’s barely enough. Mobile carriers constantly drive for additional session capacity. In data center networks, customers want more throughput. It’s not the correct device for everyone, but in the correct network, it’s just what is needed.

  5. The data center SRX Series devices allow the administrator to increase performance by adding more processing. All of the branch devices have fixed processing.

  6. Only the branch SRX Series devices support UTM. The focus was for a single small device to handle all of the security features for the branch. As of the writing of this book, the UTM feature set is in beta for the high-end or data center devices.

  7. It doesn’t make sense for the data center SRX Series to manage the AX411 because of the typical deployment location for the product. Although it is technically possible, it is not a feature that many people would want to use, and hence Juniper didn’t enable this.

  8. The maximum length a Juniper license can be purchased for is five years.

  9. A Services Processing Card on the data center SRX Series enables the processing of traffic for all services. All of the services available on the SRX, such as IDP, VPN, and NAT, are processed by the same card. There is no need to add additional cards for each type of service.

  10. The distributed processing model of the data center SRX Series allows the device to scale to an unprecedented degree. Each component in the processing of a flow optimizes the processing capabilities to allow you to add more than a dozen processors to the chassis, with equal distribution of sessions across all of the cards.