VPC Network Access Management — Traffic Analysis


VPCs are a great way to isolate network resources. However, these resources will occasionally need to communicate with entities outside of the VPC.

This could be a package update server or a customer visiting a hosted website. AWS provides a number of components that allow this access while still keeping the environment as isolated as possible. The VPC described in this article is a simple setup containing a single Linux server and a single Windows server.

To start, though, it’s important to understand exactly what is being protected against. VPC flow logs provide insight into what traffic is allowed and what is being blocked. ElasticSearch and Kibana will be utilized to analyze the data. The ElasticSearch domain will be set up ahead of time, since CloudWatch requires the cluster to be active when streaming logs to it.

Basic ElasticSearch configuration

Navigating to the ElasticSearch console will bring up existing domains if there are any, or the “getting started” page if none exist yet. First, the domain needs to be named. In this case “vpc-flow-logs” will be used:

VPC flow logs

Next, the cluster details need to be defined. Depending on how much VPC traffic there is, a larger instance size or more nodes may be required. As the VPC for the purposes of this article has a small number of resources, the defaults are suitable:

VPC node configuration

Next comes setting up permissions, which can become an interesting ordeal. For most use cases, the following will be required:

  • A policy that allows an AWS account or IAM user access to ElasticSearch
  • A policy that allows access from a specific IP

The reason for the IP policy is that Kibana makes calls to the ElasticSearch API. For this to work, some kind of restriction must be in place: either signed requests or a restriction to certain IPs. As the browser is unable to do signed requests without some form of add-on, an IP restriction will be necessary. To set one up, first select “Allow access to the domain from specific IPs”:

VPC domain access

This will bring up a dialog requesting the IP address. Please note that the service does not understand IPv6 addresses, so only enter an IPv4 address.

As for which IP address to use, it’s recommended to open the “My IP” constraint feature of security groups in a separate tab, as that reflects the address the server will see the HTTP(S) traffic originating from. Once the IP address is obtained, enter it into the field:

VPC IP address

Now, copy the resulting Statement value and paste it into a temporary location. Then go back and select “Allow or deny access to one or more AWS accounts or IAM users” from the list of policies in the dropdown. From here, indicate either the account ID, account ARN, or the IAM user ARN:

VPC User access

The ending policy should look something like this:
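Assuming the IP statement saved earlier plus an account-level statement, the combined policy might look like the following sketch. The account ID, region, and IP address shown here are placeholders, not values from the article:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123456789012:root" },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/vpc-flow-logs/*"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "*" },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:123456789012:domain/vpc-flow-logs/*",
      "Condition": { "IpAddress": { "aws:SourceIp": "203.0.113.25" } }
    }
  ]
}
```

The first statement covers the account or IAM user, while the second is the anonymous-but-IP-restricted access Kibana needs.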

Once done setting up the policy, click “Next” which leads to the final step. After reviewing all the information and verifying it looks okay, click on “Confirm and create”. The ElasticSearch main page will appear and show the domain status as loading:

Elastic main page

This will take some time to create. Wait until the page shows the domain as “Active”, which is required to stream the VPC flow logs to it. While the ElasticSearch service is working on making the domain active, an IAM policy can be created for the Lambda which will stream the logs to ElasticSearch.

Lambda permissions

To stream VPC flow logs to the ElasticSearch domain, CloudWatch invokes a Lambda function. This function will require permissions to access the ES domain as follows:
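A minimal sketch of that permission policy, keeping the placeholders the article uses (the region here is an assumption):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "es:ESHttpPost",
      "Resource": "arn:aws:es:us-east-1:[your-account-id-here]:domain/vpc-flow-logs/*"
    }
  ]
}
```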

Replace [your-account-id-here] with the AWS account ID the ElasticSearch cluster lives in, and vpc-flow-logs if a different domain name was used in the basic setup. It’s also recommended to add the AWSLambdaBasicExecutionRole policy template so that the Lambda can write logs, which is useful for debugging issues writing to the ElasticSearch domain. Finally, the role will need a trust policy allowing Lambda to assume it:
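The trust policy is the standard one for any Lambda-assumed role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```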

Now that this is done, it’s time to wait for the ElasticSearch domain to be in the active state.

Setting up VPC flow logs

After some time, the domain should show up in Active state:

VPC Flow Logs

Now, a CloudWatch log group needs to be set up for the VPC flow logs to write to. After navigating to the CloudWatch console, click on “Logs”, then select “Create log group” from the “Actions” dropdown:

VPC actions

A dialog will pop up asking for the log group name. vpc-flow-log-group will be used as an easy-to-discern identifier:

VPC create log group
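For those who prefer the command line, the same log group can be created with a single AWS CLI command, assuming credentials with CloudWatch Logs access are configured:

```shell
aws logs create-log-group --log-group-name vpc-flow-log-group
```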

Now that the log group is created, it’s time to set up the VPC flow logs. This can be done at various levels of the VPC, but to see all the traffic, the VPC top level will be used. First, navigate to the VPC console to get a list of VPCs:

VPC list

After selecting the desired VPC choose “Create Flow Log” from the Actions dropdown:

VPC lists

First, flow logs will need permission to access CloudWatch and write to the log group just created. Click the “Set Up Permissions” link:

Create flow log

After clicking the link, the IAM console will be displayed in a new window with the necessary permissions and a role name prepopulated. Since a role with the suggested name already exists in this account, it will be renamed:

Flow log request
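For reference, the permissions the console prepopulates amount to a policy along these lines. This is a sketch; the console-generated version may differ slightly:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams"
      ],
      "Resource": "*"
    }
  ]
}
```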

After clicking “Allow”, close the IAM window and select the newly created role, along with the log group:

Flow log group

Click on “Create Flow Log” if everything looks okay. Now that the flow log is created it’s time to link it with ElasticSearch.
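The same flow log can also be created from the CLI. The VPC ID, account ID, and role name below are placeholders:

```shell
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-0123456789abcdef0 \
  --traffic-type ALL \
  --log-group-name vpc-flow-log-group \
  --deliver-logs-permission-arn arn:aws:iam::123456789012:role/flowlogsRole
```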

Subscribe ElasticSearch To The Flow Logs

First, head over to the CloudWatch console and click on “Logs”. Then select the log group created previously, click the “Actions” dropdown, and select “Stream to Amazon ElasticSearch Service”:

ElasticSearch flow log

Now in the resulting window, select the ES domain and Lambda IAM Role created previously:

Streaming vpc

Next, select the log format as “Amazon VPC Flow Logs” and accept the defaults:

Configure log format

Next, confirm the settings and click “Next”. Finally, after everything is complete, select “Start Streaming”. CloudWatch will confirm the streaming setup:

CloudWatch

Now the VPC flow logs are delivered every 10 minutes. It’s a good idea to wait about an hour so that there is a suitable amount of data to work with. After that time has passed, it’s on to analyzing the data.

Data Analysis Setup

Now that a decent amount of time has passed, it’s time to look at the data. If the logs streamed correctly, the main view for the vpc-flow-logs domain should show output similar to the following:

data analysis setup

If nothing shows except the “.kibana” index, it’s most likely one of the following:

  • The ElasticSearch policy doesn’t have the proper permissions
  • The Lambda IAM role doesn’t have the proper permissions
  • VPC logs can’t access CloudWatch
  • The CloudWatch subscriber is not set up

If it’s anything related to the ElasticSearch policies, the domain will need to be rebuilt after permissions are adjusted. For the other issues, it’s best to check logs to see what happened. Note that since this is streaming, 10 minutes or more will need to pass for the VPC logs to collect enough data. If everything looks okay, go ahead and click on the “Kibana” link to access the interface. If an error appears such as:

{"Message":"User: anonymous is not authorized to perform: es:ESHttpGet on resource: vpc-flow-logs"}

 

then the ElasticSearch permission policy is not set up properly, the IP address being used is not in the policy, or the IP address statement is not using the AWS * principal. Once everything is in order, the Kibana UI will look similar to the following:

Configure an index pattern

“Index name or pattern” will be set to the non-“.kibana” index that was displayed in the ElasticSearch console. Either the exact value or an “index-*” wildcard-style declaration can be used. With the exception of the dates in the string, the output should look similar to the following:

Configure 2

Once this is done a number of fields will be displayed:

Fields

Now that the data is available in a UI dashboard, it’s time to analyze it.

Data Analysis

A quick look at the “Discover” section of the console will show the data that is viewable:

Discover

This shows all the VPC traffic. However, the real concern is what traffic is being rejected. Fortunately, the log setup provides easy access to an “action” keyword, which is either “ACCEPT” or “REJECT” depending on the action taken against the traffic. To narrow it down to rejected traffic, enter action: “REJECT” into the search bar:

VPC Traffic
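Outside of Kibana, the same REJECT filter can be sketched against raw flow log records. This is a minimal illustration using made-up records in the default version-2 flow log format, not actual captured data:

```python
# Minimal sketch: filter raw VPC flow log records for rejected traffic.
# The default (version 2) record is a space-separated line with these fields:
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_record(line):
    """Split a default-format flow log line into a field dict."""
    return dict(zip(FIELDS, line.split()))

# Illustrative records only (RFC 5737 documentation addresses).
records = [
    "2 123456789012 eni-abc123 198.51.100.7 10.0.0.5 43210 22 6 5 400 1500000000 1500000060 REJECT OK",
    "2 123456789012 eni-abc123 10.0.0.5 198.51.100.9 22 43211 6 10 840 1500000000 1500000060 ACCEPT OK",
]

rejected = [r for r in map(parse_record, records) if r["action"] == "REJECT"]
for r in rejected:
    print(r["srcaddr"], r["srcport"], "->", r["dstaddr"], r["dstport"])
```

This mirrors what the Kibana search does server-side: match on the parsed "action" field and keep only the rejected flows.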

While the search results are now narrowed down, there’s still a lot of noise in the information. This is due to there not being any fields of interest declared. Of concern from the log data are the source address, the source port, the destination address, and the destination port.

Time is already included since it is the primary index. Adding fields can be done by hovering over the field and clicking on “Add” when it appears. This will produce something similar to the following:

Time

Looking over the data, there appear to be a number of requests from port 80 to various ephemeral ports. One reason this can occur is that someone is forging a packet with another IP, in this case the IP of a VPC instance, and making a request to an HTTP server with it. For the reasons someone would do this, reading the Cisco page on IP spoofing is recommended.

Another interesting piece of information is rejected traffic targeting privileged ports. The filter will be updated to look for packets aimed at ports up to 1024:

Traffic ports
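The privileged-port filter can be entered in the Kibana search bar as a Lucene range query. The dstport field name here assumes the “Amazon VPC Flow Logs” format selected earlier:

```
action: "REJECT" AND dstport: [0 TO 1024]
```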

A few ports that are being targeted:

  • 445: SMB
  • 111: SUN Remote Procedure Call
  • 23: telnet
  • 81: Some web servers use this as one of the non-standard listen ports. Usually something in front of it is acting as a proxy for the standard port 80

The SSH traffic is being rejected because the target system happens to be a Windows instance, so nothing is listening there. What about the Linux instance that also sits in a public subnet? The filters will be adjusted to view only accepted traffic on privileged ports:

SSH ports

One port is 123, to which the server itself is initiating connections. This is the NTP port, and according to a forum posting, the Amazon NTP servers declared in /etc/ntp.conf on the official Amazon Linux AMIs are actually an NTP vendor zone. More important is the accepted traffic to the SSH port. Clicking the + magnifying glass allows filtering on field data, in this case a destination port of 22:

destination ports

destination ports 2

Not only that, but there also appears to be a lot of traffic going to RDP:

RDP

Now, with data in hand, it’s time to evaluate some methods of securing the systems. The next part of this series will look at some architecture best practices that can help mitigate these issues.

