
1.3.0

@jacknagz released this 06 Jun 18:44

New Features

New Schema Options

Log schemas now support list, boolean, and float types for more accurate data typing (#77). As records are parsed by the rule_processor, fields are now cast to these new types so rules can reference them directly.

Example Schema:

  "carbonblack:feed.storage.hit.process": {
    "schema": {
      "sensor_id": "integer",
      "report_score": "integer",
      "from_feed_search": "boolean",
      "feed_id": "integer",
      "ioc_type": "string",
      "ioc_attr": {},
      "docs": [],
      "group": "string",
      "server_name": "string",
      "hostname": "string",
      "feed_name": "string",
      "cb_server": "string",
      "timestamp": "float",
      "process_guid": "string",
      "interface_ip": "string",
      "type": "string"
    },
    "parser": "json"
    }
  }

Example rule:

@rule(logs=['carbonblack:feed.storage.hit.process'],
      matchers=[],
      outputs=['slack:soc', 'pagerduty:soc'])
def cb_storage_hit_process(rec):
    """This event occurs when an intelligence feed indicator matches a new process upon ingest. """

    return (
      rec['from_feed_search'] == True and
      len(rec['docs']) > 1
    )
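
Since timestamp is declared as a float in the schema above, rules can also compare it numerically. A minimal sketch (the rule name and epoch threshold below are purely illustrative):

@rule(logs=['carbonblack:feed.storage.hit.process'],
      matchers=[],
      outputs=['slack:soc'])
def cb_storage_hit_recent(rec):
    """Illustrative only: the float-typed timestamp can be compared numerically"""

    # 1496770000.0 is a made-up epoch threshold for demonstration purposes
    return rec['timestamp'] > 1496770000.0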

Additionally, to handle logs with optional keys, a new parser option, optional_top_level_keys, has been added (#95). At a minimum, an incoming record must contain the keys defined in the schema; if any of the defined optional_top_level_keys are missing, an empty default value (per the declared type) is added to the parsed record. This ensures rules can reference optional keys without raising an exception when the keys are absent (see the sketch after the example logs below).

Example Schema:

  "github:enterprise": {
    "schema": {
      "@timestamp": "string",
      "@version": "integer",
      "host": "string",
      "message": "string",
      "port": "integer",
      "received_at": "string",
      "tags": []
    },
    "parser": "json",
    "configuration": {
      "optional_top_level_keys": {
        "logsource": "string",
        "pid": "integer",
        "program": "string",
        "timestamp": "string"
      }
    }
  }

This schema supports the following logs:

[
  {
    "message": "github_audit message",
    "@version": "1",
    "@timestamp": "2015-05-20T20:00:36.731Z",
    "host": "10.0.0.1",
    "port": 59310,
    "tags": [],
    "received_at": "2015-05-20T20:00:36.731Z",
    "timestamp": "May 20 20:00:36",
    "logsource": "github",
    "program": "github_audit"
  },
  {
    "message": "github_audit message",
    "@version": "1",
    "@timestamp": "2015-05-20T20:00:36.731Z",
    "host": "10.0.0.1",
    "port": 59310,
     "pid": 1599,
    "tags": [],
    "received_at": "2015-05-20T20:00:36.731Z",
    "timestamp": "May 20 20:00:36",
    "logsource": "github",
    "program": "github_audit"
  }
]
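
Because missing optional keys are filled in with typed defaults, rules can reference them without guarding against a missing key. A minimal sketch (the rule name and logic are illustrative, assuming the empty integer default is 0):

@rule(logs=['github:enterprise'],
      matchers=[],
      outputs=['slack:soc'])
def github_audit_with_pid(rec):
    """Illustrative only: 'pid' and 'program' are optional but always present after parsing"""

    return (
        rec['program'] == 'github_audit' and
        rec['pid'] > 0
    )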

Disable Rules

To quickly disable rules without deleting them, a new decorator (@disable) has been added (#75). Note: this decorator must be placed directly above the @rule decorator, with no blank lines in between:

Example rule:

rule = StreamRules.rule
disable = StreamRules.disable()

@disable
@rule(logs=['carbonblack:feed.storage.hit.process'],
      matchers=[],
      outputs=['slack:soc', 'pagerduty:soc'])
def cb_storage_hit_process(rec):
    """This event occurs when an intelligence feed indicator matches a new process upon ingest. """

    return (
      rec['from_feed_search'] == True and
      len(rec['docs']) > 1
    )

When @disable is used, make sure to update the rule's integration test so it no longer expects an alert to trigger:

{
  "records": [
    {
      "data": {...},
      "description": "CB Feed Storage Hit Process should not trigger an alert",
      "trigger": false,
      "source": "my_s3_bucket",
      "service": "s3"
    }
  ]
}

Slack Message Format

Messages sent to Slack outputs are now formatted using mrkdwn styling, and sent as a series of attachments (#135).

Example output: (slack_example screenshot)
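
For reference, a minimal sketch of the attachment format, assuming the standard Slack incoming-webhook payload shape with mrkdwn enabled (field contents here are illustrative, not StreamAlert's exact output):

{
  "attachments": [
    {
      "title": "StreamAlert Rule Triggered: cb_storage_hit_process",
      "text": "*source:* my_s3_bucket\n*outputs:* slack:soc",
      "mrkdwn_in": ["text"]
    }
  ]
}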

Modular Outputs

Adding new outputs for supported services is now as easy as running:

$ python stream_alert_cli.py output new --service slack

This will create a new Slack integration. Prompts will then walk you through entering any information required for the service. The supported services as of this release are: AWS Lambda, AWS S3, PagerDuty, Phantom, and Slack.

As an added bonus, these changes allow rules to send alerts to multiple configured outputs per service. For example, a rule could previously send to only one Slack 'destination', but can now send to multiple configured webhooks for the same service. To send to different Slack integrations, simply add them to the rule's outputs, like so:

@rule(logs=['carbonblack:feed.storage.hit.binary'],
      matchers=[],
      outputs=['slack:alerts_channel', 'slack:direct_message', 'pagerduty:corp_alerts'])
def cb_feed_storage_hit_binary_virustotal(rec):
    """Identify binaries that match against the virustotal feed"""

    return (
        rec['type'] == 'feed.storage.hit.binary' and
        rec['feed_name'] == 'virustotal'
    )

The StreamAlert output classes have also been refactored to make it easy to add new output services (#97). The documentation has been updated to demonstrate this extensibility, along with a walkthrough of how to implement a new output service for sending alerts.
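
As a rough illustration of the pattern (all class, method, and attribute names below are hypothetical stand-ins, not StreamAlert's actual API; see the updated documentation for the real base class and registration details), adding a new output service amounts to subclassing a shared base and implementing a dispatch method:

import json
import logging

class OutputBase(object):
    """Stub standing in for the shared output base class (hypothetical name)"""
    def load_config(self, descriptor):
        # In StreamAlert, the details saved by `stream_alert_cli.py output new`
        # would be loaded here; hard-coded so this sketch runs on its own
        return {'url': 'https://example.com/webhook/' + descriptor}

class CustomWebhookOutput(OutputBase):
    """Hypothetical new output service that delivers alerts to a custom webhook"""
    SERVICE = 'custom-webhook'

    def dispatch(self, alert, descriptor):
        config = self.load_config(descriptor)
        payload = json.dumps(alert)
        # An HTTP POST of `payload` to config['url'] would go here
        logging.info('Dispatching alert to %s (%d bytes)', config['url'], len(payload))
        return True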

Support SNS inputs and S3/Lambda Outputs

To promote serverless, service-oriented architectures, StreamAlert now has the ability to accept input from arbitrary AWS SNS topics (#118/#119) and invoke arbitrary AWS Lambda functions as an output (#110).
To enable StreamAlert to accept input from SNS topics, modify the conf/inputs.json file; Terraform will automatically handle subscribing to the topic(s).

Example of adding an SNS input:

{
  "aws-sns": {
    "our_sns_input": "arn:aws:sns:us-east-1:012345678912:sns-topic-name"
  }
}

As noted in the Modular Outputs section above, users can add AWS Lambda functions as outputs via the stream_alert_cli.py tool. This is done by running the following command and following the prompts:

$ python stream_alert_cli.py output new --service aws-lambda

Example:

$ python stream_alert_cli.py output new --service aws-lambda
StreamAlertCLI [INFO]: Issues? Report here: https://github.com/airbnb/streamalert/issues

Please supply a short and unique descriptor for this Lambda function configuration
(ie: abbreviated name): external-lambda-function

Please supply the AWS arn, with the optional qualifier, that represents the Lambda function
to use for this configuration (ie: arn:aws:lambda:aws-region:acct-id:function:output_function:qualifier): 
arn:aws:lambda:us-east-1:012345678912:function:my_function:Production

StreamAlertCLI [INFO]: Successfully saved 'external-lambda-function' output configuration
for service 'aws-lambda'
StreamAlertCLI [INFO]: Completed
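
Once saved, the new output can be referenced from a rule using the same service:descriptor convention shown above. For example (the rule body is illustrative):

@rule(logs=['github:enterprise'],
      matchers=[],
      outputs=['aws-lambda:external-lambda-function', 'slack:soc'])
def forward_github_audit(rec):
    """Illustrative only: send matching records to the configured Lambda output"""

    return rec['program'] == 'github_audit'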

Bug Fixes

#126, #137, #147, #161 - StreamAlert performance improvements thanks to @ryandeivert!
#100 - Check Slack message size before sending, and appropriately split long messages.
#79 - Do not upload the Lambda deployment package if pip fails to install dependencies.