One of the best new features of Bro 2.0 is the logging framework. It gives you structured logs which are easily parsed for simplified log analysis. It also provides a nice abstraction between writing something to a log and handling that data before it is written to disk. I’ll provide a very brief overview of the logging framework and then go into some filters that I’ve been helping people with lately.

The logging framework in Bro 2.0 is based around sets of key-value pairs. This alone was a huge step for Bro and helps bring it into the modern day, since Bro logs now map neatly onto tables and document-store databases. To take it further, we wanted to separate the act of sending data off to be logged from the handling of how that data is written to a data store (e.g. text files on disk). When data for a log is ready to be written out, log records are sent to "Logging Streams", which can then be filtered, modified, and redirected with "Logging Filters".
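As a quick illustration of how easily these structured logs parse, here is a minimal Python sketch. The sample lines and field subset are invented for illustration; real Bro ASCII logs carry additional metadata headers (and many more fields), but the shape is the same: '#'-prefixed metadata lines, then tab-separated records named by the #fields line.

```python
import io

# Hypothetical sample in the shape of a Bro 2.0 ASCII log (tab-separated,
# with '#'-prefixed metadata lines; the fields here are a made-up subset).
SAMPLE_LOG = (
    "#separator \\x09\n"
    "#fields\tts\tid.orig_h\tid.resp_h\thost\n"
    "#types\ttime\taddr\taddr\tstring\n"
    "1329843175.736750\t10.0.0.5\t192.0.2.10\texample.com\n"
    "1329843177.123456\t10.0.0.7\t192.0.2.11\texample.org\n"
)

def parse_bro_log(lines):
    """Yield each record as a dict keyed by the names from the #fields line."""
    fields = []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#"):
            if line.startswith("#fields"):
                fields = line.split("\t")[1:]
            continue  # skip other metadata lines (#separator, #types, ...)
        yield dict(zip(fields, line.split("\t")))

records = list(parse_bro_log(io.StringIO(SAMPLE_LOG)))
print(records[0]["host"])  # -> example.com
```

Because every record comes out as named key-value pairs, loading these logs into a database or search tool is a short script rather than a regex exercise.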

The need to apply a custom filter can arise from a number of functionality requirements:
  • Preventing data from being logged for privacy reasons.
  • Pre-splitting logs to ease searching.
  • Splitting logs to direct some of them to external data stores. I’m not showing any examples of this, though, since Bro 2.0 only supports textual logs.

Example 1

The first example is for a user who wants to split their HTTP logs into something they can manage and search more easily. Initially, they decide to simply split the logs into "inbound" requests and "outbound" requests. The following filter requires that the Site::local_nets variable is configured appropriately, which happens automatically if you run Bro with BroControl and have your local networks defined in <prefix>/etc/networks.cfg.

event bro_init()
        {
        # First remove the default filter.
        Log::remove_default_filter(HTTP::LOG);
        # Add the filter to direct logs to the appropriate file name.
        Log::add_filter(HTTP::LOG, [$name = "http-directions",
                                    $path_func(id: Log::ID, path: string, rec: HTTP::Info) = {
                                        return (Site::is_local_addr(rec$id$orig_h) ? "http_outbound" : "http_inbound");
                                    }]);
        }

With that code added to local.bro or another custom script, Bro will output two HTTP logs: http_inbound.log and http_outbound.log. The log files are created dynamically as they are needed, so you may not see one until there is traffic to populate it.

Taking another step, that same user might also decide that anytime a Windows executable transits their monitoring point over HTTP, they want it written to a separate log file in addition to the inbound or outbound log. The file type detection is based on the contents of the HTTP response, so it won’t be misled by ‘Content-Type’ headers or odd URLs.
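The idea behind content-based detection can be sketched with a few lines of Python. This is a hypothetical simplification, not Bro’s actual implementation (which uses content signatures rather than this single check), but it shows why headers and URLs can’t fool it: Windows PE/DOS executables begin with the two-byte "MZ" signature, so only the response body matters.

```python
def looks_like_windows_exe(body: bytes) -> bool:
    """Content-based check: PE/DOS executables start with the 'MZ' signature."""
    return body[:2] == b"MZ"

# Headers and URLs are irrelevant; only the response body is inspected.
print(looks_like_windows_exe(b"MZ\x90\x00\x03"))  # -> True
print(looks_like_windows_exe(b"<html><body>"))    # -> False
```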

The next block of code adds a second filter to the HTTP::LOG stream which is executed separately and therefore is able to duplicate logs.

event bro_init()
        {
        # First remove the default filter.
        Log::remove_default_filter(HTTP::LOG);
        # Add the filter to direct logs to the appropriate file name.
        Log::add_filter(HTTP::LOG, [$name = "http-directions",
                                    $path_func(id: Log::ID, path: string, rec: HTTP::Info) = {
                                        return (Site::is_local_addr(rec$id$orig_h) ? "http_outbound" : "http_inbound");
                                    }]);

        # Add a filter to pull Windows PE executables into a separate log.
        Log::add_filter(HTTP::LOG, [$name = "http-executables",
                                    $path = "http_exe",
                                    $pred(rec: HTTP::Info) = { return rec?$mime_type && rec$mime_type == "application/x-dosexec"; }]);
        }

With that, a Bro installation will end up with three log files for HTTP traffic (assuming the correct traffic is seen): http_inbound.log, http_outbound.log, and http_exe.log. The lines in http_exe.log will be duplicated in their appropriate "inbound" or "outbound" log.

There are a number of cases where sites either can’t or won’t log outbound requests, to avoid intruding on their users’ privacy. You can accommodate that by adding a predicate ($pred) function to the filter that splits the log into inbound and outbound. The predicate returns false (F) whenever the originator of the connection is local, which prevents the log record from proceeding.

event bro_init()
        {
        # First remove the default filter.
        Log::remove_default_filter(HTTP::LOG);
        # Add the filter to direct logs to the appropriate file name.
        Log::add_filter(HTTP::LOG, [$name = "http-directions",
                                    $pred(rec: HTTP::Info) = {
                                        return ! Site::is_local_addr(rec$id$orig_h);
                                    },
                                    $path_func(id: Log::ID, path: string, rec: HTTP::Info) = {
                                        return (Site::is_local_addr(rec$id$orig_h) ? "http_outbound" : "http_inbound");
                                    }]);

        # Add a filter to pull Windows PE executables into a separate log.
        Log::add_filter(HTTP::LOG, [$name = "http-executables",
                                    $path = "http_exe",
                                    $pred(rec: HTTP::Info) = { return rec?$mime_type && rec$mime_type == "application/x-dosexec"; }]);
        }

The above filters will result in two log files (given the right traffic): "http_inbound.log" and "http_exe.log". The log with Windows executables will still contain outbound requests as long as a Windows executable was returned, because the predicate on that filter only excludes records where the server did not return a Windows executable.

Now, we’ve barely scratched the surface of filtering for the logging framework. Perhaps a few more examples?

Example 2

I recently created some filters for Doug Burks’ excellent Security Onion Linux distribution to help with data management. He let me know that he needed to know which host interface saw the traffic behind any particular log record. Bro clusters normally merge the logs output by the workers into single logs on the manager, which theoretically loses that information. It turns out that the logging framework can cope with this. Specifically, he needed the HTTP and Conn logs identified by interface, and here is the script that implements it.

event bro_init()
        {
        if ( reading_live_traffic() )
                {
                Log::remove_default_filter(HTTP::LOG);
                Log::add_filter(HTTP::LOG, [$name = "http-interfaces",
                                            $path_func(id: Log::ID, path: string, rec: HTTP::Info) =
                                                {
                                                local peer = get_event_peer()$descr;
                                                if ( peer in Cluster::nodes && Cluster::nodes[peer]?$interface )
                                                        return cat("http_", Cluster::nodes[peer]$interface);
                                                else
                                                        return "http";
                                                }
                                            ]);

                Log::remove_default_filter(Conn::LOG);
                Log::add_filter(Conn::LOG, [$name = "conn-interfaces",
                                            $path_func(id: Log::ID, path: string, rec: Conn::Info) =
                                                {
                                                local peer = get_event_peer()$descr;
                                                if ( peer in Cluster::nodes && Cluster::nodes[peer]?$interface )
                                                        return cat("conn_", Cluster::nodes[peer]$interface);
                                                else
                                                        return "conn";
                                                }
                                            ]);
                }
        }

This script looks up, in the cluster configuration, the interface for the host that most recently sent an event and appends that interface name to the log path. Most people won’t need this filter because it’s fairly specific to what Doug is doing on Security Onion, but I wanted to point it out because it’s not something we ever envisioned doing with the logging framework, yet it works flawlessly.

Example 3

There is one last demonstration filter I want to show, for filtering DNS logs, though it’s just as applicable to HTTP and SSL. Someone came to me recently because they were using Bro 2.0 to monitor their DNS server, and they wanted to split their DNS logs into separate files based on whether the requested name is in a local or nonlocal zone. Here is the script I wrote.

redef Site::local_zones = { "example.com", "example.org" };

event bro_init()
        {
        Log::remove_default_filter(DNS::LOG);
        Log::add_filter(DNS::LOG, [$name = "dns_split",
                                   $path_func(id: Log::ID, path: string, rec: DNS::Info) = {
                                        return (Site::is_local_name(rec$query) ? "dns_localzone" : "dns_remotezone"); }]);
        }

Be sure to fill in all of your top-level local DNS zones in the Site::local_zones variable, as I have done at the top of the script. The script simply removes the default DNS filter and applies a new one which guides each record into either "dns_localzone.log" or "dns_remotezone.log", depending on whether the requested name falls within one of your configured local zones.

Example 4

Ok, I lied. That wasn’t the last filter. I want to show two more small filtering tricks before wrapping this up. Sometimes the default logs contain more information than you are allowed to log or have the disk space to store. In that case you can selectively include or exclude fields in the output. Here is an example that logs only the timestamp, querying IP address, and query for the DNS log.

event bro_init()
        {
        Log::remove_default_filter(DNS::LOG);
        Log::add_filter(DNS::LOG, [$name="new-default",
                                   $include=set("ts", "id.orig_h", "query")]);
        }

That will result in only those three fields in your dns.log file.

Some sites can’t log the subjects of email messages seen in SMTP traffic, which are included in the default SMTP logs. It’s just as easy to remove a single field and leave the rest of the log intact. Here’s an example which removes only the subject field from the SMTP logs.

event bro_init()
        {
        Log::remove_default_filter(SMTP::LOG);
        Log::add_filter(SMTP::LOG, [$name="new-default",
                                    $exclude=set("subject")]);
        }

Wrap up

Hopefully this post will inspire people to ask more questions about filtering logs in even trickier ways and really press the logging framework into new and unexpected uses. At the very least it should give you a few examples to copy and paste, and then build on as you customize Bro’s output to suit your requirements. Keep in mind that the filtering and redirection techniques from the examples can be combined in various ways.

For further information on Bro’s logging framework you can find our full documentation here: http://www.bro-ids.org/documentation/logging.html
