A Thorough Explanation of Netty, the High-Performance Network Application Framework

Netty is a high-performance network application framework that is widely used; in the Java world it has basically become the standard choice for network programs. The Netty framework is feature-rich and quite complex. Today we mainly analyze the threading model in Netty, because the threading model directly affects the performance of a network program.

Before introducing Netty’s threading model, we first need to clarify the problem: understand where the performance bottleneck of network programming lies, and then look at how Netty’s threading model solves it.

The bottleneck of network programming performance

Earlier, we wrote a simple echo network program using blocking I/O (BIO). In the BIO model, every read() and write() operation blocks the current thread. If a client establishes a connection with the server but sends no data, the server’s read() operation blocks indefinitely. Therefore, with the BIO model, an independent thread is generally allocated to each socket, so that a thread blocked on one socket does not hold up reads and writes on other sockets. The BIO threading model is shown in the figure below: each socket corresponds to an independent thread. To avoid frequently creating and destroying threads, a thread pool can be used, but the one-to-one correspondence between sockets and threads does not change.

BIO’s threading model
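To make the thread-per-connection idea concrete, here is a minimal sketch of a BIO echo server; the port 9090, the pool size, and the class name BioEchoServer are illustrative and not part of the original echo program:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BioEchoServer {
    public static void main(String[] args) throws Exception {
        // thread pool, but each socket is still served by its own thread
        ExecutorService pool = Executors.newFixedThreadPool(100);
        try (ServerSocket serverSocket = new ServerSocket(9090)) {
            while (true) {
                Socket socket = serverSocket.accept(); // blocks until a connection arrives
                // one task per socket: the read() below blocks that task's thread
                pool.execute(() -> {
                    try (InputStream in = socket.getInputStream();
                         OutputStream out = socket.getOutputStream()) {
                        byte[] buf = new byte[1024];
                        int n;
                        while ((n = in.read(buf)) != -1) { // blocks until data is readable
                            out.write(buf, 0, n);          // echo back
                        }
                    } catch (Exception ignored) {
                    }
                });
            }
        }
    }
}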

The BIO threading model is suitable for scenarios with a modest number of socket connections. In today’s Internet scenarios, however, servers often need to support hundreds of thousands or even millions of connections, and creating that many threads is obviously unrealistic, so the BIO threading model cannot solve the problem of millions of connections. If you look closely, you will find that although there are many connections in Internet scenarios, requests on each connection are infrequent, so a thread spends most of its time waiting for I/O to become ready. In other words, threads are blocked most of the time, which is a complete waste; if we can solve this problem, we will not need so many threads.

Following this idea, we can optimize the threading model as shown in the figure below: one thread handles multiple connections, so thread utilization goes up and the number of threads required goes down. The idea is good, but it cannot be implemented with the BIO APIs. Why? Because BIO socket read and write operations are blocking: once a blocking API is called, the calling thread is blocked until the I/O is ready and cannot serve any other socket connection in the meantime.

Fortunately, Java also provides non-blocking I/O (NIO) APIs. With the non-blocking APIs, one thread can handle multiple connections. How is this done? The Reactor pattern is generally used, including in Netty’s implementation. So, to understand Netty’s implementation, we first need to understand the Reactor pattern.
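Before looking at the Reactor pattern itself, here is a minimal sketch of the underlying idea with Java NIO: a single thread waits on many connections at once through a Selector. The port and buffer size are illustrative, and the write path is simplified (it assumes the echo fits in one write):

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;
import java.util.Iterator;

public class NioEchoServer {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.configureBlocking(false);
        server.bind(new InetSocketAddress(9090));
        server.register(selector, SelectionKey.OP_ACCEPT);

        while (true) {
            selector.select();                       // one thread waits for events on all channels
            Iterator<SelectionKey> it = selector.selectedKeys().iterator();
            while (it.hasNext()) {
                SelectionKey key = it.next();
                it.remove();
                if (key.isAcceptable()) {            // a new connection is ready to accept
                    SocketChannel ch = server.accept();
                    ch.configureBlocking(false);
                    ch.register(selector, SelectionKey.OP_READ);
                } else if (key.isReadable()) {       // data is ready to read
                    SocketChannel ch = (SocketChannel) key.channel();
                    ByteBuffer buf = ByteBuffer.allocate(1024);
                    int n = ch.read(buf);
                    if (n == -1) {
                        ch.close();
                    } else {
                        buf.flip();
                        ch.write(buf);               // echo back (simplified)
                    }
                }
            }
        }
    }
}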

Reactor pattern

The following is the class structure diagram of the Reactor pattern. Handle refers to an I/O handle, which in Java network programming is essentially a network connection. Event Handler is easy to understand: it is an event handler whose handle_event() method processes I/O events (each Event Handler handles one I/O Handle) and whose get_handle() method returns that I/O Handle. Synchronous Event Demultiplexer can be understood as the I/O multiplexing API provided by the operating system, such as select() in the POSIX standard and epoll() in Linux.
The core of the Reactor pattern is, of course, the Reactor class. Its register_handler() and remove_handler() methods register and remove an event handler; its handle_events() method is the core, the engine of the Reactor pattern. The core logic of this method is: first, listen for network events through the select() method provided by the synchronous event demultiplexer; when a network event is ready, traverse the event handlers to process it. Since network events keep arriving, the main program starts the Reactor by calling handle_events() inside a while(true){} loop, as shown below.

void Reactor::handle_events()
{
    /*
     * Listen for network events through the select()
     * method provided by the synchronous event demultiplexer
     */
    select(handlers);
    /* handle the ready network events */
    for (h in handlers)
    {
        h.handle_event();
    }
}

/* start the event loop in the main program */
while (true)
{
    handle_events();
}

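The class structure described above can be roughly sketched in Java as follows. This is only an illustration of the roles (the names mirror the pattern and the demultiplexer call is omitted), not any concrete library API:

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

/* Event Handler: processes I/O events on one handle (network connection) */
interface EventHandler {
    Object getHandle();          // the I/O handle this handler is bound to
    void handleEvent();          // called when an event on the handle is ready
}

/* Reactor: registers handlers and drives the event loop */
class Reactor {
    private final List<EventHandler> handlers = new CopyOnWriteArrayList<>();

    void registerHandler(EventHandler h) { handlers.add(h); }
    void removeHandler(EventHandler h)   { handlers.remove(h); }

    void handleEvents() {
        // a real implementation would first wait here on a synchronous
        // event demultiplexer such as select()/epoll() for ready handles
        for (EventHandler h : handlers) {
            h.handleEvent();
        }
    }
}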
Threading Model in Netty

Although Netty’s implementation draws on the Reactor pattern, it is not a straight copy. The core concept in Netty is the event loop (EventLoop), which is essentially the Reactor in the Reactor pattern: it is responsible for monitoring network events and calling event handlers to process them. In Netty 4.x, network connections and EventLoops are in a stable many-to-one relationship, and EventLoops and Java threads are in a one-to-one relationship. "Stable" here means that once the relationship is established, it does not change. In other words, a network connection corresponds to exactly one EventLoop, and an EventLoop corresponds to exactly one Java thread, so a network connection is always served by the same Java thread.

What is the benefit of a network connection always corresponding to one Java thread? The biggest advantage is that event processing for a given connection is single-threaded, which avoids all kinds of concurrency problems.
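One way to observe this binding: inside any handler callback, the current thread is always the channel’s EventLoop thread, which can be checked with inEventLoop(). A small hypothetical handler (the class name is made up for illustration):

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

class EventLoopAwareHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        // every callback for this channel fires on its single, fixed EventLoop thread
        boolean onEventLoop = ctx.channel().eventLoop().inEventLoop(); // always true here
        System.out.println("on event loop thread: " + onEventLoop
            + " (" + Thread.currentThread().getName() + ")");
        ctx.write(msg);
    }
}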

The threading model in Netty is shown in the figure below. This figure is very similar to the ideal threading model figure we discussed earlier: the core goal is to use one thread to handle multiple network connections.

Another core concept in Netty is EventLoopGroup. As the name suggests, an EventLoopGroup consists of a group of EventLoops. In practice, two EventLoopGroups are generally created, one called bossGroup and the other called workerGroup. Why are there two EventLoopGroups?

This is related to how sockets handle network requests. A server listens for TCP connection requests on a dedicated listening socket; whenever a TCP connection is successfully established, a new socket is created, and all subsequent reads and writes on that connection go through the newly created socket. In other words, connection requests and read/write requests are handled by two different sockets. When we discussed network requests above, we only discussed read and write requests and ignored connection requests in order to simplify the model.

In Netty, the bossGroup handles connection requests and the workerGroup handles read and write requests. After the bossGroup accepts a connection, it hands the connection over to the workerGroup. The workerGroup contains multiple EventLoops, so which EventLoop will handle the new connection? This requires a load-balancing algorithm; Netty currently uses round-robin.
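A toy way to see this hand-out (not how Netty registers channels internally, just an illustration): EventLoopGroup.next() returns the EventLoop that would serve the next channel, cycling through the group:

import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;

public class NextDemo {
    public static void main(String[] args) {
        EventLoopGroup workerGroup = new NioEventLoopGroup(4);
        // next() picks the EventLoop for each new channel; the printed instances repeat in a cycle of 4
        for (int i = 0; i < 8; i++) {
            System.out.println(workerGroup.next());
        }
        workerGroup.shutdownGracefully();
    }
}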

Let’s use Netty to re-implement the server side of the echo program and get a close-up feel for Netty.

Implementing Echo Program Server with Netty

The following sample code implements the echo server based on Netty: first an event handler is created (equivalent to the event handler in the Reactor pattern), then the bossGroup and workerGroup are created, and then a ServerBootstrap is created and initialized. The code is still quite simple, but there are two things to note.

First, if the server only listens on one port, the bossGroup needs only one EventLoop; more than that would be a waste.

Second, by default Netty creates "2 * number of CPU cores" EventLoops. Since a network connection is bound to a fixed EventLoop, an event handler must not perform blocking operations while processing network events; otherwise it can easily cause widespread request timeouts. If blocking operations are truly unavoidable, they can be offloaded to a thread pool and processed asynchronously (see the sketch after the sample code).

import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.ChannelFuture;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;

/* event handler */
final EchoServerHandler serverHandler = new EchoServerHandler();
/* boss thread group */
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
/* worker thread group */
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
    ServerBootstrap b = new ServerBootstrap();
    b.group(bossGroup, workerGroup)
     .channel(NioServerSocketChannel.class)
     .childHandler(new ChannelInitializer<SocketChannel>() {
         @Override
         public void initChannel(SocketChannel ch) {
             ch.pipeline().addLast(serverHandler);
         }
     });
    /* bind the server port */
    ChannelFuture f = b.bind(9090).sync();
    f.channel().closeFuture().sync();
} finally {
    /* terminate the worker thread group */
    workerGroup.shutdownGracefully();
    /* terminate the boss thread group */
    bossGroup.shutdownGracefully();
}

import io.netty.channel.ChannelHandler;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

/* socket connection handler */
@ChannelHandler.Sharable  // the same instance is added to every channel's pipeline
class EchoServerHandler extends ChannelInboundHandlerAdapter {
    /* handle read events */
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        ctx.write(msg);
    }

    /* handle read-complete events */
    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    /* handle exception events */
    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
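Regarding the second caveat above: if blocking work is truly unavoidable, one option is to run the blocking handler on a separate executor rather than on the channel’s EventLoop. A minimal sketch, assuming a hypothetical BlockingBusinessHandler and an arbitrary pool size of 16:

import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.util.concurrent.DefaultEventExecutorGroup;
import io.netty.util.concurrent.EventExecutorGroup;

/* a hypothetical handler that performs blocking work (e.g. a database call) */
class BlockingBusinessHandler extends ChannelInboundHandlerAdapter {
    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
        Thread.sleep(50);          // stand-in for a blocking call
        ctx.writeAndFlush(msg);
    }
}

/* a separate thread pool so the blocking handler does not run on the EventLoop */
EventExecutorGroup blockingGroup = new DefaultEventExecutorGroup(16);

ChannelInitializer<SocketChannel> initializer = new ChannelInitializer<SocketChannel>() {
    @Override
    public void initChannel(SocketChannel ch) {
        // callbacks of BlockingBusinessHandler run on blockingGroup, not the channel's
        // EventLoop, so blocking inside it no longer stalls other connections
        ch.pipeline().addLast(blockingGroup, new BlockingBusinessHandler());
    }
};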

Summary

Netty is an excellent network programming framework with very good performance. To achieve high performance, Netty makes many optimizations, such as optimizing ByteBuffer and supporting zero copy, and its threading model is closely related to concurrent programming. Netty’s threading model is designed very carefully: each network connection is bound to a single thread, so for any given connection, read and write operations are performed on one thread, which avoids the various pitfalls of concurrent programs.

You can give a thumbs up if it helps you~
