1. Spring Boot is an interesting new framework which brings all the power of the Spring framework but with very minimal startup costs. Starting a new application is extremely easy. Vertx is the node.js of the Java world, with support for multiple languages.

    If your requirement involves accepting HTTP/REST requests and integrating with multiple middleware services or APIs to get back a coherent response, then Vertx is the tool for you. Spring Boot, on the other hand, brings in all sorts of enterprise services like security, JMS, Java EE integration, etc.

    Below is some example code which accepts an HTTP request and responds asynchronously. The DeferredResult is what allows the servlet to respond asynchronously. The full code can be viewed here on Github.
    
    @RequestMapping(value = '/posts/{id}', method = RequestMethod.GET)
    public DeferredResult<ResponseEntity<String>> thirdPartyJson(@PathVariable("id") long id) {
        DeferredResult<ResponseEntity<String>> deferredResult = new DeferredResult<>()
        // send it out to the 3rd-party JSON provider asynchronously
        client.getNow("/posts/${id}") { resp ->
            println "Got response ${resp.statusCode}"
            resp.bodyHandler { body ->
                // publish the response body via the event bus
                eventBus.publish("hello.listeners", body.toString())
                HttpHeaders headers = new HttpHeaders()
                headers.set("Content-Type", MediaType.APPLICATION_JSON_VALUE)
                ResponseEntity<String> responseEntity = new ResponseEntity<>(body.toString(), headers, HttpStatus.CREATED)
                // complete the servlet request with the 3rd-party response
                deferredResult.setResult(responseEntity)
            }
        }
        return deferredResult
    }
    
    
    
    The closure passed to getNow is executed asynchronously, so there is no blocking involved. The same pattern can be used for calling multiple backend services, accessing a db and so on.

    The eventBus part of the code can be used to send messages to any other Vertx application running on the same network. This is an extremely powerful feature: you can use standalone Vertx as your API server and client, but use Spring Boot with embedded Vertx for db, security etc. and use the event bus to communicate freely between the two applications. In short, get the best of both worlds.
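    For illustration, here is a minimal sketch of a standalone verticle that could consume these messages, written against the Vert.x 2.x Java API that was current at the time (the class name is hypothetical; the project's actual listener is the Groovy EventSink script described next):

    import org.vertx.java.core.Handler;
    import org.vertx.java.core.eventbus.Message;
    import org.vertx.java.platform.Verticle;

    public class EventSinkVerticle extends Verticle {
        @Override
        public void start() {
            // subscribe to the same address the Spring Boot app publishes to
            vertx.eventBus().registerHandler("hello.listeners",
                    new Handler<Message<String>>() {
                        public void handle(Message<String> msg) {
                            container.logger().info("Received: " + msg.body());
                        }
                    });
        }
    }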

    An example Vertx script is provided in the scripts folder which can accept messages from the main project. To execute it, run vertx run EventSink.groovy -cluster from the command line. Now you will be able to receive the JSON messages sent from the Spring Boot application in this standalone Vertx EventSink process. This assumes that Vertx is installed locally.

    The Spring setup for instantiating the eventBus and httpClient beans can be seen in the GreetingApplication.groovy file. The Gradle build file and pom.xml have the necessary information on how Groovy support was set up.
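    For reference, the wiring looks roughly like the following, rewritten here in Java against the Vert.x 2.x embedded API (the class name and host are assumptions; the real beans live in GreetingApplication.groovy):

    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.vertx.java.core.Vertx;
    import org.vertx.java.core.VertxFactory;
    import org.vertx.java.core.eventbus.EventBus;
    import org.vertx.java.core.http.HttpClient;

    @Configuration
    public class VertxConfig {
        @Bean
        public Vertx vertx() {
            // for event-bus clustering across processes, the clustered
            // variant VertxFactory.newVertx(port, hostname) would be used
            return VertxFactory.newVertx();
        }

        @Bean
        public EventBus eventBus() {
            return vertx().eventBus();
        }

        @Bean
        public HttpClient httpClient() {
            // points at the 3rd-party JSON provider (host assumed)
            return vertx().createHttpClient().setHost("example.com").setPort(80);
        }
    }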

    To run this Spring Boot application from the command line, first execute gradle build, then follow it with java -jar build/libs/spring-boot-vertx-0.0.3-SNAPSHOT.jar.

    The assumptions are that you have Java 1.7+, Vertx, Gradle and Maven installed locally.


  2. Introduction

    Storm is a big data project which can provide results in real time, as compared to the offline batch processing of Hadoop. More importantly, Storm is easy to understand and has a lower learning curve than other similar solutions. It boasts a very clean, simple API, good documentation and a thriving community.

    Fundamental to Storm are the concepts of Streams, Spouts and Bolts; ample documentation is provided on the wiki. A Spout can mostly be considered a source of your data, think JMS or a DB. Bolts are where you place your transformation and business logic, and topologies are used to link these together using groupings. Streams are unbounded sequences of tuples emitted/consumed by Spouts and Bolts.

    In this post, I am going to create a Spout using Java Chronicle. Chronicle can be used for ultra-fast persisted disk storage of records. For example, on normal hardware, writing a billion records to disk will take it less than an hour. Readers can then read them back at millions per second. So even a single Chronicle-based Spout can theoretically feed a reasonably huge Storm deployment spanning several machines.

    The example code of this post is hosted on github.
    So assume that your users are using TV remotes to surf channels and you would like to find out the following information:
    1. How much time did a channel get viewed overall?
    2. How much time did each user view a particular channel?
    3. How many times did the users switch channels, i.e. the total click count?

    Information Source a.k.a Producer a.k.a Spout

    Since this example processes a TV remote's click information, the actual source of data would be transmitted via cable or satellite to the media provider's database/file. But since I don't intend to launch my own satellite for this post, here is some code that will set up the input data.
    
    chronicle = new IndexedChronicle(basePath);
    excerpt = chronicle.createExcerpt();
    for (int i = 0; i < 10000; i++) {
        excerpt.startExcerpt(20); // capacity for three ints (12 bytes used)
        TvRemote click = TvRemote.createRandomRecord();
        Integer channelId = click.getChannelId();
        excerpt.writeInt(click.getUser());
        excerpt.writeInt(channelId);
        excerpt.writeInt(click.getDuration());
        excerpt.finish();
    }
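    The TvRemote bean referenced above is a simple value class; the real one is in the github repo, so the sketch below is hypothetical (the random-value ranges in particular are assumptions). Note that a channelId of 0 is reserved as a poison record, which the spout will use later to signal the end of the stream.

    import java.util.Random;

    public class TvRemote {
        private static final Random RANDOM = new Random();
        private final int user;
        private final int channelId;
        private final int duration;

        public TvRemote(int user, int channelId, int duration) {
            this.user = user;
            this.channelId = channelId;
            this.duration = duration;
        }

        public static TvRemote createRandomRecord() {
            // user ids 1-100, channel ids 1-10 (0 is reserved as poison),
            // view durations 1-3600 seconds
            return new TvRemote(1 + RANDOM.nextInt(100),
                    1 + RANDOM.nextInt(10), 1 + RANDOM.nextInt(3600));
        }

        public int getUser() { return user; }
        public int getChannelId() { return channelId; }
        public int getDuration() { return duration; }
    }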
    


    Storm now uses this ChronicleSpout's nextTuple method to get the source stream of tuples. On invocation of this method, ChronicleSpout will read a record from the previously created data file and provide it to Storm as a tuple.
    @Override
    public void nextTuple() {
     if (excerpt.nextIndex()) {
      int userId = excerpt.readInt();
      int channelId = excerpt.readInt();
      int duration = excerpt.readInt();
      if (channelId == 0) {
       // poison, set countdown LATCH.
       StartStorm.LATCH.countDown();
      } else {
       collector.emit(new Values(userId, channelId, duration));
      }
     } else {
      // don't busy-spin; not strictly required here since we have the
      // poison record.
      Utils.sleep(500);
     }
    }

    Using the emit method on the collector, the Spout signals its downstream Bolts that new data has become available. The way this was emitted is known as an un-anchored emit. To read more about anchoring and message delivery guarantees, see this wiki page.
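    For comparison, an anchored emit from a spout attaches a message id that Storm can later ack or fail, enabling replay; a sketch (using the chronicle index as the message id is an assumption):

    // anchored emit: the second argument is a message id Storm can track
    collector.emit(new Values(userId, channelId, duration), excerpt.index());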
    To store meta information about which fields are emitted, and in which order, the ChronicleSpout class uses the declareOutputFields method as shown below.
    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
     declarer.declare(new Fields("userId", "channelId", "duration"));
    }

    In a real-world deployment the spout will be serialized and sent across to different JVMs, potentially running on different machines altogether. For this reason, the class needs to be serializable. Anything like a connection or file which is non-serializable needs to be marked transient. To create the connection again on any JVM or machine, Storm uses the open method.
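    A minimal sketch of what that open method can look like for this spout (the field names are assumptions; the real code is in the github repo):

    @Override
    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
        try {
            // re-create the transient, non-serializable chronicle on
            // whichever worker JVM Storm deploys this spout to
            this.chronicle = new IndexedChronicle(basePath);
            this.excerpt = chronicle.createExcerpt();
        } catch (IOException e) {
            throw new RuntimeException("Could not open chronicle at " + basePath, e);
        }
    }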

    Now this raises an interesting question: the chronicle is IO-bound to some local file, so if Storm loads the spout on a completely different machine, how is it supposed to open the data file there? Either you need a network file system mount in place, so that every machine reads from a common location, or you can use an InProcessChronicleSink to connect to a ChronicleSource running at some arbitrary host and port within your network.

    This sink will then be the producer as far as your Spout is concerned, getting records over the network and writing them to the chronicle data file. The reading part does not change; the spout still reads from the file this sink is writing to. Here's a dzone post about how it's done, albeit using standalone processes for sink and source. The concept is the same though.

    Data Analytics

    As per the requirements, we need to come up with 3 different transformed outputs for the same input data, and this is where the Storm topology and API shine through. The transformation is done via Bolts. Let's consider the case where we want to know which channels were viewed for how long across the entire user base, i.e. requirement 1. The following Bolt does this processing. The input for this bolt is the tuple emitted by the spout. Since we are only interested in the channel and the duration of view time, we ignore the first parameter, user id.
    @Override
    public void execute(Tuple input) {
     Integer channelId = input.getInteger(1);
     Integer viewTime = channelViewTimeMap.get(channelId);
     if (null == viewTime) {
      viewTime = input.getInteger(2);
     } else {
      viewTime += input.getInteger(2);
     }
     channelViewTimeMap.put(channelId, viewTime);
     collector.emit(new Values(channelId, viewTime));
     collector.ack(input);
    }

    This view-time metric is internally stored in a HashMap, with no concurrency headaches involved. This is another feature of Storm: only one thread accesses the bolt logic at any time, so users can write simple, single-threaded code.
    Using different bolts, we are able to process and retrieve different types of information. All the logic within a Bolt goes inside the execute method. The emit method is used to send data to any downstream bolts. Storm has an algorithm by which it does acks very efficiently without consuming much memory. If you don't care about manual acking, you would probably want to extend the BaseBasicBolt class, which takes care of it internally.
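    As an illustration, here is a hedged sketch of the same channel view-time logic written against BaseBasicBolt (the class name is an assumption), where acking happens automatically once execute returns:

    import java.util.HashMap;
    import java.util.Map;

    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    public class ChannelViewTimeBasicBolt extends BaseBasicBolt {
        private final Map<Integer, Integer> channelViewTimeMap = new HashMap<Integer, Integer>();

        @Override
        public void execute(Tuple input, BasicOutputCollector collector) {
            Integer channelId = input.getInteger(1);
            Integer viewTime = channelViewTimeMap.get(channelId);
            viewTime = (viewTime == null) ? input.getInteger(2) : viewTime + input.getInteger(2);
            channelViewTimeMap.put(channelId, viewTime);
            // no collector.ack(input) needed; BaseBasicBolt handles it
            collector.emit(new Values(channelId, viewTime));
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("channelId", "viewTime"));
        }
    }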

    Topology

    A topology is the logical structure linking spouts and bolts; the TopologyBuilder class is used to build one. Here, the aim is to build a topology in which the single ChronicleSpout feeds both view-time bolts in parallel.

    As you will notice from the code within the bolts, there is some grouping happening. For example, UserChannelViewTimeBolt accumulates the amount of time an individual user has viewed a channel in an internal HashMap variable: private Map<Integer, Map<Integer,Integer>> userChannelViewMap. Since Storm will be running this Bolt/Task in a number of JVMs with varying parallelism, it is necessary to ensure that records emitted for a particular user and channel always go to the same Bolt instance, otherwise the metric will be wrong. The way to do this is by using the groupings feature of the TopologyBuilder class.
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("chronicle", new ChronicleSpout());
    builder.setBolt("view-time-by-user", new UserChannelViewTimeBolt(), 2)
     .fieldsGrouping("chronicle", new Fields("userId", "channelId"));
    builder.setBolt("view-time", new ChannelViewTimeBolt(), 2)
     .fieldsGrouping("chronicle", new Fields("channelId"));
    
    The UserChannelViewTimeBolt does a fieldsGrouping so that tuples with the same userId and channelId fields will always reach the same bolt instance. ChannelViewTimeBolt, on the other hand, does not care about which user, since it is calculating the overall view duration across users; hence its fieldsGrouping only contains channelId.

    The grouping is where the magic of linking spouts and bolts occurs. It provides a strategy by which tuples emitted by the spout get routed to the appropriate Bolt. There are other groupings besides fieldsGrouping; here is a wiki link with information about the various Storm groupings, or you can write your own.

    Execution

    The StartStorm class is responsible for creating the topology and starting it locally. The LocalCluster is used for this. It is an extremely useful feature for testing your topology before deploying it at scale across multiple machines. More details about Storm installation and deployment in production can be found on this page. In a real-world app, use StormSubmitter to submit the topology to the cluster like this.
    StormSubmitter.submitTopology("mytopology", conf, topology);
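    For local testing, the flow in StartStorm is roughly the following sketch (the topology name and config are assumptions; the real class is in the github repo):

    Config conf = new Config();
    LocalCluster cluster = new LocalCluster();
    cluster.submitTopology("tv-remote-topology", conf, builder.createTopology());
    // wait until the spout sees the poison record, then tear down
    StartStorm.LATCH.await();
    cluster.shutdown();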



  3. This example is based on the following post. The idea was to take a simple single-player, client-only game and make it a multi-player game. For this purpose I have used a Java game server, Nadron (formerly jetserver), which is a wrapper over Netty 4 with some built-in semantics for game creation.

    The lostdecade blog post provides a good write-up on the single-player game. Basically it has a monster, a hero and the hero catching the monster. To make it multi-player, a few changes were made to the original. The full JavaScript file can be found here.

    JavaScript Client

    The main changes to convert it to multi-player are as follows:
    1) The state of all players, which is their x,y co-ordinates on the screen, needs to be captured; for this a new variable var players = [] is added, i.e. an array of all players.
    2) In the single-player version, the main game loop invoked a render function to draw all the images on screen. Now this function is invoked in response to server events. The following lines of code do that.
    
    function sessionCreationCallback(session){
        session.onmessage = render;
        ...
    3) The update and render functions were modified to handle the array of players instead of a single player.
    4) A config object is created for logging in to the remote Nadron server; it basically contains the username, password and the game room to which this player is trying to connect.
    // simple object used to store credentials for logging in to remote Nadron server
    var config = {
        user: "user",
        pass: "pass",
        connectionKey: "LDGameRoom"
    };
    5) For network communication with the remote Nadron server, the Nadron client script (which provides the nad object used below) was included in the LDGame HTML file.

    So how does the data go from client to server and back? 

    Let's go through it line by line.
    The following lines take care of logging in to the remote server:
    
    // connect to remote Nadron server.
    function startGame(){
        nad.sessionFactory("ws://localhost:18090/nadsocket", config, sessionCreationCallback);
    }
    
    Notice the callback function? When the session is created it will be invoked, and it in turn sets the render method as the message listener on the newly created session.
    
    // This is the callback function that gets invoked
    // when session is connected to Nadron server.
    function sessionCreationCallback(session){
        session.onmessage = render;
    

    So now, whenever the server sends data to the browser, the render method will be invoked. The sessionCreationCallback function also sets up the 'gameloop' at the client side using JavaScript's setInterval. This loop will continuously invoke the update function, which captures the current position of the hero and transmits it to Nadron using the following lines:
    
    var message = {
        "hero": hero
    };
    // send new position to gameroom
    session.send(nad.NEvent(nad.NETWORK_MESSAGE, message));
    

    That's it for the client side. To summarize: we use a list to store all players, render is now invoked via server events, and update takes care of sending JSON objects back to the server. All cool! So now let's move on to the server side of things.

    Nadron Server

    At the server side, the game logic is written in the package lostdecade. The Entity and LDState classes are straightforward game beans which hold player information, no rocket science there. In fact, no rocket science anywhere!
    The LDEvent is a bit more interesting. Even though the class has only a getter and setter, notice that it inherits from the DefaultEvent class, which in turn implements the Event interface. Nadron communicates using events, hence this hierarchy. So the whole purpose of this class is to just "wrap" the game state (the LDState class in our example) for network communication, that's it.

    This brings us to LDRoom, which is actually the only logic-bearing class in this whole game. For those uninitiated to Nadron/jetserver, GameRooms are just a grouping mechanism for player sessions. Since a room is the central part seen by all players, it is also a good fit for game logic.
    When a player logs in, the onLogin method gets invoked on the GameRoom, so we override it to do all the "connections".

    A player session which has just logged in has no event handlers attached, so any data sent by the browser/client will not get handled. So our first order of business is to add a handler for data and events; the following lines of code show how to do that.
    
    // Add a session handler to player session. So that it can receive events.
    playerSession.addHandler(new DefaultSessionEventHandler(playerSession) {
        @Override
        protected void onDataIn(Event event) {
            if (null != event.getSource()) {
                // Pass the player session in the event context so that the
            // game room knows which player session sent the message.
                event.setEventContext(new DefaultEventContext(playerSession, null));
                // pass the event to the game room
                playerSession.getGameRoom().send(event);
            }
        }
    });
    

    As is visible from the inline comments, these lines just pass the incoming data on to the game room and ensure that a "context" is attached, so that the game room knows who is speaking!

    The other 2 actions of the onLogin method are to 1) initialize the game objects for the newly logged-in player and 2) inform everyone else that a new kid is in town, i.e. do a broadcast. Code below.
    
    // Now add the hero to the set of entities managed by LDGameState
    Entity hero = createHero(playerSession);
    LDGameState state = (LDGameState) getStateManager().getState();
    state.getEntities().add(hero);
    // We do broadcast instead of send since all connected players need to
    // know about the new player's arrival so that this hero can be drawn on
    // their screens.
    sendBroadcast(Events.networkEvent(new LDGameState(state.getEntities(),
        state.getMonster(), hero)));

    Game Logic

    The inner class GameSessionHandler in LDRoom has most of the game logic. This handler is a special handler which will only see events of type SESSION_MESSAGE; the reason is that a game room is not interested in other events, at least for this simple game. Notice the line below in the LDRoom constructor?
    
    addHandler(new GameSessionHandler(this));
    
    That's where this handler starts listening on the game room. So the data flow is now something like this:
    Incoming network data -> Player Session -> Session Event Handler (anonymous inner class in onLogin method) -> Game Room -> GameSessionHandler.

    The onEvent method of GameSessionHandler receives events from the player sessions and invokes the update method, which just checks if the hero and monster are touching. If they are, the state is reset, i.e. the monster is thrown randomly somewhere on the screen and the heroes are all put in the middle of the screen. If they are not touching, the new x,y co-ordinates of this hero are transmitted to all other players so that they can update their screens.
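    For reference, the touch check in the original lostdecade game is a simple bounding-box overlap; a hypothetical Java version (accessor names assumed, the 32px sprite size taken from the original game) might look like this:

    private boolean isTouching(Entity hero, Entity monster) {
        // axis-aligned bounding-box overlap test for 32x32 sprites
        return hero.getX() <= (monster.getX() + 32)
                && monster.getX() <= (hero.getX() + 32)
                && hero.getY() <= (monster.getY() + 32)
                && monster.getY() <= (hero.getY() + 32);
    }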
    That's it, game over!
    ...

    Configuration

    OK, I lied. So who configured this room? How was the client able to log in with that specific room name? Where is the main class?
    If you take a look at the SpringConfig class you will notice 2 beans toward the end of the file: one is ldGame and the other is ldGameRoom.
    
    public @Bean(name = "LDGame")
    Game ldGame()
    {
        return new SimpleGame(2, "LDGame");
    }
    
    public @Bean(name = "LDGameRoom")
    GameRoom ldGameRoom()
    {
        GameRoomSessionBuilder sessionBuilder = new GameRoomSessionBuilder();
        sessionBuilder.parentGame(ldGame()).gameRoomName("LDGameRoom")
            .protocol(webSocketProtocol);
        LDRoom room = new LDRoom(sessionBuilder);
        return room;
    }
    

    These are the methods responsible for creating the game and room beans. The lookupService bean then registers the room in its map under the name "LDGameRoom", the same name used by the client when logging in; this is how the lookup happens. Note that in a more sophisticated environment you would pick up these values from a DB rather than use a map.
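    A sketch of how that registration can look (the SimpleLookupService constructor shown is an assumption; check the actual SpringConfig for the real wiring):

    public @Bean(name = "lookupService")
    LookupService lookupService() {
        Map<String, GameRoom> refKeyGameRoomMap = new HashMap<String, GameRoom>();
        // the key must match the connectionKey sent in the client's config
        refKeyGameRoomMap.put("LDGameRoom", ldGameRoom());
        return new SimpleLookupService(refKeyGameRoomMap);
    }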
    The main class is GameServer, which as you can see is loading the Spring context.
    
    //Initialize spring context to load games and game rooms and other beans.
    AbstractApplicationContext ctx = new AnnotationConfigApplicationContext(SpringConfig.class);
    // For the destroy method to work.
    ctx.registerShutdownHook();
    // Start the main game server
    ServerManager serverManager = ctx.getBean(ServerManager.class);
    serverManager.startServers();
    

    Execution

    First start the server; GameServer is the main class and you can start it from Eclipse. Be sure to run it with the following VM argument: -Dlog4j.configuration=GameServerLog4j.properties, otherwise it won't log properly.

    If you want to start it from the command prompt, please include the nadron jar and the dependent libraries in the classpath.
    The client can be started by right-clicking on the LDGame.html file and running it in an HTML5-compatible browser.
    The screenshot below shows multiple players on the board; the reason is that I opened this game in multiple browser tabs and started playing simultaneously.

    Advanced

    This section mostly covers data transfer over the network and the related Netty protocols. If you want to know the internals of Nadron and how it leverages Netty, you might want to take a look at this wiki page. The page is a little outdated since it was written for jetserver, but the core concepts are very much the same.

    The data is transferred from client to server as JSON, using a simple JSON.stringify(e). At the server side, things are a bit more complicated, since the Jackson parser which deserializes this JSON to a Java object needs to know which class to deserialize to. For this reason the client needs to send an initial class-name event to the server; the following line does that.
    
    // Send the java event class name for Jackson to work properly.
    session.send(nad.CNameEvent("io.nadron.example.lostdecade.LDEvent"));
    

    At the server side the Jackson library is used to convert the incoming JSON to an LDEvent. The TextWebsocketDecoder class in the Netty pipeline does that. Here is the sample code.
    @Override
    protected void decode(ChannelHandlerContext ctx, TextWebSocketFrame frame,
     MessageList<Object> out) throws Exception
    {
       // Get the existing class from the context. If not available, then
       // default to DefaultEvent.class
       Attribute<Class<? extends Event>> attr = ctx.attr(eventClass);
       Class<? extends Event> theClass = attr.get();
       boolean unknownClass = false;
       if (null == theClass)
       {
          unknownClass = true;
          theClass = DefaultEvent.class;
       }
       String json = frame.text();
       Event event = jackson.readValue(json, theClass);
       ...
    

    Netty pipeline structure

    The following websocket protocol code shows the encoders and decoders in the server pipeline which handle network communication with the client. Unless you need your own specific wire protocol, you shouldn't have to touch this part.
    @Override
    public void applyProtocol(PlayerSession playerSession)
    {
        LOG.trace("Going to apply {} on session: {}", getProtocolName(),
                playerSession);
        ChannelPipeline pipeline = NettyUtils.getPipeLineOfConnection(playerSession);
        pipeline.addLast("textWebsocketDecoder", textWebsocketDecoder);
        pipeline.addLast("eventHandler", new DefaultToServerHandler(
                playerSession));
        pipeline.addLast("textWebsocketEncoder", textWebsocketEncoder);
        ...
    

    What if I want to change the port, other configuration, deploy my own protocol?

    Spring to the rescue: all the configuration is in the resources folder. You can override (actually hide) whichever bean you want by re-defining the same bean in your SpringConfig file. Configuration like port numbers is in the properties file, again overridable by re-declaring the properties bean.

    How about multiple games and game rooms on the same server?

    The SpringConfig file already hosts 2 games, with 3 different protocols and multiple game rooms. Take a look at the following beans in this file to see how it's done.
    
    public @Bean
    Game zombieGame() {
    ...
    public @Bean(name = "Zombie_Rooms")
    List<GameRoom> zombieRooms() {
    ...
    public @Bean(name = "Zombie_Room_Websocket")
    GameRoom zombieRoom2() {
    ...
    public @Bean(name = "LDGame")
    Game ldGame() {
    ...
    public @Bean(name = "LDGameRoom")
    GameRoom ldGameRoom() {
    ...
    
    public @Bean(name = "lookupService")
    LookupService lookupService() {
    The LookupService then stores all the rooms in a map, and the client can decide which one to log in to. Depending on the client's choice, the network protocol will change; for example, if the client chooses the room Zombie_Room_1 then it has to use a binary protocol, but if it chooses Zombie_Room_Websocket then it can play the same game over the websocket protocol.

    Troubleshooting

    1. Client not connecting to server: are the port numbers on both sides correct? The default port is 18090.
    2. Data not received by server/client: check the log files or put a breakpoint in TextWebsocketDecoder/Encoder, the LDRoom handlers etc. to see where it is getting dropped.
    3. At the client side, put a breakpoint in the render function to view incoming data.
    4. For more information on the network packets, follow this tutorial on how to set up Eclipse to trace TCP traffic.





  4. This post shows how to configure a Netty 4 project using Spring 3.2+ and Maven. The source is checked in at github. The Spring configuration uses pure Java configuration with no XML involved. At the end of this post, the corresponding XML configuration is also shown.

    Eclipse Project Layout

    Netty
    In order to start a Netty server the general steps are:
    • Create a ServerBootstrap object.
    • Configure parameters like bossGroup, workerGroup and childHandler, and set ChannelOptions like keep-alive on this bootstrap.
    • Create the channel using this bootstrap.
    • On application exit, ensure that you call shutdownGracefully or a similar method on the NioEventLoopGroups for a clean exit.
    So assuming that you have created this bootstrap in Spring, the following few lines of code are enough to start and stop the server.
    @PostConstruct
    public void start() throws Exception {
        // bind and keep a reference to the server channel; don't sync on
        // closeFuture() here, or @PostConstruct would block forever
        serverChannel = b.bind(tcpPort).sync().channel();
    }
    
    @PreDestroy
    public void stop() {
        serverChannel.close();
    }

    Here b is the bootstrap injected by the Spring configuration. You can view the full code in TCPServer.

    Spring
    The Spring configuration for Netty is a little bit involved. It has to configure the ServerBootstrap, the ChannelInitializer and the ChannelOption beans, along with a few decoders and encoders so that the example is more thorough. Here is a line-by-line explanation of the Spring configuration. The full code for this file is at SpringConfig.

    Annotations
    @Configuration
    @ComponentScan("org.nerdronix")
    @PropertySource("classpath:netty-server.properties")
    public class SpringConfig { ...
    

    The @Configuration annotation ensures that this Java file is treated like a Spring application-context.xml file. The component-scan feature is used to configure the package or packages that Spring will scan for autowiring. Note that any POJO which needs to be Spring-managed should be annotated with @Component or a sub-annotation. More on the @PropertySource annotation later on.
    Now we move on to step 2, i.e. the actual bean instantiation happening in SpringConfig.

    Bean Creation
    The following code shows how to create a bean.

    @Bean(name = "bossGroup", destroyMethod = "shutdownGracefully")
    public NioEventLoopGroup bossGroup() {
        return new NioEventLoopGroup(bossCount);
    }
    
    The @Bean annotation signifies that the method returns a bean; the destroyMethod parameter tells Spring to invoke the bean's shutdownGracefully method on application exit. This parameter is optional, and so is the name.
    So where is bossCount coming from? To explain, we move to step 3.

    PropertySource Configuration.
    Spring 3.1 introduced an approach whereby a bean can have its properties injected in a very easy manner using the following 2 annotations: @PropertySource and @Value.

    @PropertySource("classpath:netty-server.properties")
    ...
    ...
    @Value("${boss.thread.count}")
    private int bossCount;
    
    Additionally, for a pure Java configuration solution, you also need to configure the following bean in the SpringConfig class, otherwise the properties won't be evaluated correctly by the @Value annotation.

    /**
    * Necessary to make the @Value annotations work.
    *
    */
    @Bean
    public static PropertySourcesPlaceholderConfigurer propertyPlaceholderConfigurer() {
        return new PropertySourcesPlaceholderConfigurer();
    }
    
    The corresponding entry in the netty-server.properties file is boss.thread.count=2. Notice that Spring auto-converted the property value from String to int; it does this for all common types, so you don't have to call Integer.valueOf(string) yourself.
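    For reference, a plausible netty-server.properties for this example (boss.thread.count and the port are stated in the post; the remaining keys mirror the placeholders in the XML configuration at the end, and their values here are assumptions):

    boss.thread.count=2
    worker.thread.count=2
    tcp.port=8090
    so.keepalive=true
    so.backlog=100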

    Netty Bootstrap configuration
    The snippet below shows how the Netty bootstrap is configured.
    @Bean(name = "serverBootstrap")
    public ServerBootstrap bootstrap() {
        ServerBootstrap b = new ServerBootstrap();
        b.group(bossGroup(), workerGroup())
            .channel(NioServerSocketChannel.class)
            .childHandler(protocolInitalizer);
        Map, Object> tcpChannelOptions = tcpChannelOptions();
        Set keySet = tcpChannelOptions.keySet();
        for (@SuppressWarnings("rawtypes")
            ChannelOption option : keySet) {
                b.option(option, tcpChannelOptions.get(option));
        }
        return b;
    }
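    The tcpChannelOptions() method referenced above is itself a bean; below is a sketch that mirrors the util:map from the XML version at the end of this post (the keepAlive and backlog fields are assumed to be injected via @Value):

    @Bean(name = "tcpChannelOptions")
    public Map<ChannelOption<?>, Object> tcpChannelOptions() {
        Map<ChannelOption<?>, Object> options = new HashMap<ChannelOption<?>, Object>();
        options.put(ChannelOption.SO_KEEPALIVE, keepAlive);
        options.put(ChannelOption.SO_BACKLOG, backlog);
        return options;
    }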
    

    The protocolInitalizer you see here is actually a POJO located in another package. So how do we get that bean into the SpringConfig class? The answer is that @Configuration is itself meta-annotated with @Component, which means that SpringConfig is also a Spring bean, and we can auto-wire other beans into it just like any other normal Spring bean.
    The following lines in the SpringConfig file take care of that:
    @Autowired
    @Qualifier("springProtocolInitializer")
    private StringProtocolInitalizer protocolInitalizer;
    

    The whole beauty of this IoC approach is that all the beans are singletons by default without any special effort on your side, Spring's component-scan feature takes care of wiring application beans auto-magically, and properties are injected with ease using the @PropertySource and @Value annotations. By following the coding-to-interfaces paradigm you can make your code truly modular.

    Maven
    The pom is rather self-explanatory; you need to add the dependencies for Spring, Netty and logging to make it work. The main dependencies are provided below; the parent pom has all the versions and common dependencies.
    <dependency>
        <groupId>io.netty</groupId>
        <artifactId>netty-all</artifactId>
        <version>${netty.version}</version>
    </dependency>
    
    <dependency>
        <groupId>org.springframework</groupId>
        <artifactId>spring-context</artifactId>
        <version>${org.springframework.version}</version>
    </dependency>
    


    Finally, Execution!
    Go to the Main class and execute it; the Netty server should start at the port configured in netty-server.properties, i.e. localhost:8090.

    I Want my XML!
    OK, you are still sooo 2009! Actually, I still love the XML for all its bean graphs and built-in tools, though the pure Java option is definitely winning me over. Anyway, here goes. It's not a complete example, but it shows the most interesting beans.
            <bean id="bossGroup" class="io.netty.channel.nio.NioEventLoopGroup">
            <constructor-arg type="int" index="0" value="${boss.thread.count}" />
            <constructor-arg index="1" ref="bossThreadFactory" />
            </bean>
    
            <bean id="workerGroup" class="io.netty.channel.nio.NioEventLoopGroup">
                <constructor-arg type="int" index="0" value="${worker.thread.count}" />
                <constructor-arg index="1" ref="workerThreadFactory" />
            </bean>
    
            <bean id="bossThreadFactory" class="org.nerdronix.NamedThreadFactory">
                <constructor-arg type="String" value="Server-Boss" />
            </bean>
    
            <bean id="workerThreadFactory" class="org.nerdronix.NamedThreadFactory">
                <constructor-arg type="String" index="0" value="Server-Worker" />
            </bean>
    
              
            <!-- Netty options for server bootstrap -->
            <util:map id="tcpChannelOptions" map-class="java.util.HashMap">
                <entry>
                    <key><util:constant static-field="io.netty.channel.ChannelOption.SO_KEEPALIVE"/></key>
                    <value type="java.lang.Boolean">${so.keepalive}</value>
                </entry>
                <entry>
                    <key><util:constant static-field="io.netty.channel.ChannelOption.SO_BACKLOG"/></key>
                    <value type="java.lang.Integer">${so.backlog}</value>
                </entry>
            </util:map>
    

  5. Recently I was stuck with an issue when porting an existing Java game server from the Netty 3.x to the 4.x API. A binary message sent from the server was not reaching the client. After a lot of digging around, I realized that it was indeed reaching the client, but for some reason the length of the message was getting messed up. This in turn made the LengthFieldBasedFrameDecoder wait for subsequent data without passing anything on to the next handler in the chain.
    Now, in 3.x it was easy to see the contents of a ChannelBuffer, not so with the Netty 4 ByteBuf. So there was no way I could understand what was going on, even with low-level debugging. Enter the Eclipse detail formatter feature. This awesome debugging tool allows you to re-write the toString of a class to suit your needs. Here is how I did it.
    Step 1
    In the debug window, right-click on the object you need more detail on, then click on the "New Detail Formatter" option.
    Step 2
    In the window that opens up, provide your toString code. Note that the object you want to inspect is referred to as "this". This window also supports content assist, so Ctrl+Space is available. Here is the code I used, which I got from here.

    
    byte[] bytes = new byte[this.readableBytes()];
    this.getBytes(0, bytes, 0, this.readableBytes());
    char[] hexArray = {'0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
    char[] hexChars = new char[bytes.length * 2];
    int v;
    for (int j = 0; j < bytes.length; j++) {
        v = bytes[j] & 0xFF;
        hexChars[j * 2] = hexArray[v >>> 4];
        hexChars[j * 2 + 1] = hexArray[v & 0x0F];
    }
    return new String(hexChars);
    


    Step 3
    So now, when you click on a ByteBuf in the debugger, you can actually see the binary data inside as hex.

    And the issue I was debugging? Well, it turned out that I was using ChannelHandlerContext.write instead of channel.write, and this skipped the channel handlers in the pipeline (like LengthFieldPrepender) and instead wrote directly to the network, causing corrupted data to be sent from server to client. A one-line change and the bug disappears! Hopefully not transforming into a new one!
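    The fix, in sketch form (the message variable is assumed): writing via the channel sends the data through the full pipeline, so outbound handlers such as LengthFieldPrepender get to run.

    // buggy: starts the outbound pass at this handler's position, skipping
    // outbound handlers that sit between the tail and this handler
    // ctx.write(message);

    // fixed: goes through the entire pipeline, so LengthFieldPrepender runs
    ctx.channel().write(message);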



