Announcing Neutrino for Load Balancing and L7 Switching

The eBay PaaS team is pleased to announce the open-sourcing of Neutrino, a software load balancer developed to do L7 switching and load balancing for eBay’s test infrastructure. Built on Scala and Netty, Neutrino uses the Java Virtual Machine as its run-time environment. It can run on any commodity hardware as long as JVM 7+ is available.

Why another SLB?

Our team was looking for options to replace eBay’s hardware load balancers, which are expensive, inflexible, and unable to keep up with the demand, especially in our test environments. We considered two options: either use an open-source product like HAProxy, or else build an in-house solution.

At a high level, the SLB would need to satisfy the following requirements:

  • L7 switching using canonical names
  • L7 switching using canonical names and URL context
  • L7 switching based on rules – for example, routing traffic based on the HTTP header, authentication header value, etc.
  • L4 switching
  • Ability to send traffic logs to API endpoints
  • Automated cluster management using eBay PaaS; the SLB should be able to read the network topology (which is stored in a database and can be accessed through an API) and reconfigure itself
  • Support for the most common load-balancing algorithms, such as Least Connection and Round Robin. The framework should also be extensible to add more algorithms in the future.
  • Ability to run on bare metal, a VM, or a container
  • No loss of traffic during reconfiguration

HAProxy is the most commonly used SLB in the industry. It is written in C and has a reputation for being fast and efficient in terms of processor and memory usage. It supports L4 switching as well as L7 switching using canonical names and URL context. However, HAProxy does not satisfy our requirements for rule-based L7 switching, sending logs to API endpoints, or adding new load-balancing algorithms. Reading the configuration from a database or an API could be handled by a companion application, but that solution is not optimal. We found that extending HAProxy to support these features would be difficult. For these reasons, our team decided to develop an SLB in-house.

Neutrino was built in Scala, using the Netty server, with the above requirements in mind. Neutrino can do L7 routing using canonical names, URL context, and rules. Its highly extensible pipeline architecture enables new modules to be hooked into the pipeline with minimal effort. Developers can add new switching rules and load-balancing options easily. New modules can be added to send logs to API endpoints or to load the configuration file from a database or API. Because Neutrino uses the JVM run-time environment, developers can use either Java or Scala to add modules.
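The pipeline idea can be sketched in plain Java (Neutrino modules can be written in Java or Scala). The names below are illustrative only, not Neutrino's actual API: each module is a stage that transforms the request context, and new modules are hooked in by appending stages.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.UnaryOperator;

// Hypothetical sketch of a pluggable pipeline: each stage transforms the
// request context before it reaches the pool resolver. The names here are
// illustrative, not Neutrino's real API.
class PipelineSketch {
    // A stage is just a function from request context to request context.
    interface Stage extends UnaryOperator<String> {}

    private final List<Stage> stages = new ArrayList<>();

    // Hook a new module into the pipeline.
    PipelineSketch addStage(Stage s) {
        stages.add(s);
        return this;
    }

    // Run the request context through every stage in order.
    String run(String requestContext) {
        String ctx = requestContext;
        for (Stage s : stages) {
            ctx = s.apply(ctx);
        }
        return ctx;
    }
}
```

A logging or rule-evaluation module would then be one more stage appended to the chain, without touching the existing ones.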

Key advantages of Neutrino

  • Modular and pluggable architecture
    • Neutrino supports easy extensibility for new routing and resolving policies.
    • A customizable pipeline enables adding new modules in the request and response channels.
  • Switching options
    • L7 routing can be based on canonical names, URL context, or rules.
    • L4 routing is achieved using multiple port numbers instead of floating IPs.
  • Pluggable pipeline
    • New components for the pipeline are easy to build.
    • The run-time environment is the JVM, so these components can be built in any language that runs on the JVM.
  • Horizontal scalability
    • Any number of SLBs can run in parallel, in containers, in VMs, or on bare metal.
  • High availability and performance
    • Due to its distributed architecture, Neutrino is highly available and can support very high throughput; we have measured 300+ requests per second on a 2-core VM.
    • Reconfiguration of the Neutrino cluster – including addition of machines, rules, etc. – causes no loss of traffic.
  • Traffic metrics and configuration APIs
    • Traffic metrics and configuration are exposed as APIs.
    • Metrics can be easily published to Graphite. Neutrino is also extensible to push metrics to other metrics systems.

When to use Neutrino

Neutrino’s strength as an SLB lies in its programmability and customizability. It is easy to adapt to an existing topology, and the Neutrino documentation includes examples of customizations.

Neutrino can run in a Docker container or a VM. Since Neutrino is built as a library, it can also be easily integrated and shipped along with other applications.


Neutrino can be dynamically reconfigured at run time without dropping connections. By default, Neutrino reads its configuration from a file; the configuration can be refreshed at a given interval or on demand through an API. It is also easy to move the configuration to a database or an API resource.
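One way to picture traffic-safe reconfiguration is a hypothetical Java sketch (the class names below are not Neutrino's): the active configuration is held behind an atomic reference and swapped wholesale, so every request reads one consistent snapshot and nothing is dropped mid-flight.

```java
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Hypothetical sketch of interval- or API-driven reconfiguration: the
// active configuration is replaced atomically, so requests in flight keep
// a consistent view. Names are illustrative, not Neutrino's real classes.
class ConfigReloader {
    private final AtomicReference<String> active;
    private final Supplier<String> source; // e.g. a file, database, or API

    ConfigReloader(Supplier<String> source) {
        this.source = source;
        this.active = new AtomicReference<>(source.get());
    }

    // Invoked on a timer, or on demand through an API endpoint.
    void reload() {
        active.set(source.get());
    }

    // Each request reads one immutable snapshot of the configuration.
    String snapshot() {
        return active.get();
    }
}
```

Swapping the entire configuration object, rather than mutating it in place, is what makes the reload safe for concurrent traffic.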

Load balancing and switching

By default, Neutrino supports two load-balancing options:

  • Round Robin
  • Least Connection

Additional algorithms can be supported by adding a custom load balancer.
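As an illustration of what one of these algorithms does, here is a minimal Round Robin picker in Java. The class and method names are illustrative, not Neutrino's actual extension API; a custom balancer would implement the same idea behind whatever interface the framework exposes.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal Round Robin sketch: cycle through the pool members in order.
// Illustrative names only, not Neutrino's real API.
class RoundRobin {
    private final List<String> members;
    private final AtomicInteger next = new AtomicInteger(0);

    RoundRobin(List<String> members) {
        this.members = members;
    }

    // Pick the next member, wrapping around the pool.
    String pick() {
        int i = Math.floorMod(next.getAndIncrement(), members.size());
        return members.get(i);
    }
}
```

A Least Connection balancer would instead track active connections per member and pick the minimum.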

Neutrino also supports three switching options by default:

  • L7 switching using canonical names – With this option, every HTTP request carries the service’s canonical name in the HOST header. Neutrino can look at the header and route traffic to the appropriate pool. For example, two pools could have their own canonical names: and
  • L7 context-based switching – This option is useful if the same canonical name is used for two services, but with different URL suffixes. For example, could point to one pool, and could point to the other.
  • L4 switching based on port numbers – In this case, every service will be identified using a port number, and traffic coming to each port number will be sent to the corresponding pool. For example, could point to one pool, and could point to another.

If none of the above switching options satisfies the requirements, custom pool resolvers can be added.
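To make the resolution logic concrete, here is a hypothetical Java sketch (not Neutrino's real resolver API) of a pool resolver that matches first on the HOST header (canonical name) and then on a URL-context prefix:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical L7 switching sketch: resolve a pool by canonical name
// (HOST header) first, then by URL context (path prefix). Names are
// illustrative, not Neutrino's real resolver API.
class PoolResolverSketch {
    private final Map<String, String> byHost = new LinkedHashMap<>();
    private final Map<String, String> byContext = new LinkedHashMap<>();

    void addHostRule(String host, String pool)      { byHost.put(host, pool); }
    void addContextRule(String prefix, String pool) { byContext.put(prefix, pool); }

    String resolve(String host, String path) {
        // Canonical-name match wins first.
        if (byHost.containsKey(host)) return byHost.get(host);
        // Otherwise fall back to URL-context (prefix) match.
        for (Map.Entry<String, String> e : byContext.entrySet()) {
            if (path.startsWith(e.getKey())) return e.getValue();
        }
        return "default-pool";
    }
}
```

A custom resolver for rule-based switching (e.g. on an authentication header) would add one more match step of the same shape.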

The following diagram illustrates the switching options.


Additional information

GitHub repository:

Neutrino website:

Thanks to Chris Brawn and Srivathsan Canchi for their contributions to this project.