Security Tips for Cloud Environments

Cloud Security (2020)

Security is a major concern in the cloud space today. Many organizations have considered moving from on-prem to the cloud, but they cannot seem to wrap their heads around the security considerations, the risks involved, and the fear of not having absolute control of their infrastructure.

Sehrope Sarkuni shared a set of AWS-specific security tips from the folks at Threat Stack, published as part of their ongoing series on making your AWS security as solid as it can be.

With more companies than ever leveraging cloud services like AWS, and with cloud environments becoming more and more complex, it’s critical that organizations develop proactive, comprehensive security strategies that build security in from the very beginning and evolve as their infrastructures scale to keep systems and data secure.

Securing Your AWS Environment

1. Focus on Control “While the shared responsibility model lays out that providers should focus on the security of the cloud, the reality is that you still need to have the right controls in place. As this TechTarget article puts it, “Controls around logging and identity and access management give customers more granular control and greater insight into workload security.” In other words, while you may trust your cloud provider, access controls are a very good idea because they let you enforce rules and policies that make sense for your unique business. Even if AWS is adhering to industry best practices, there may be areas where it makes sense to tweak the rules to suit your unique situation.

“Even better, the more insight you get via controls, the better off you’ll be when it comes to uncovering and addressing shadow IT (a big security risk) and overall monitoring and management of threats. So if you don’t have proper identity and access management controls in place currently, now is a good time to add this layer of security to your posture.”

— Pete Cheslock, The Real Implications of The Shared Security Model, Threat Stack; Twitter: @threatstack

2. Identify, Define, and Categorize Information Assets “The first step when opting to implement AWS security best practices is to identify all the information assets that you need to protect (application data, users data, code, applications) and then define an efficient and cost-effective approach for securing them from internal and external threats.

“After that, it is recommended to categorize all the information assets into:

Essential information assets, such as business-related information, internal specific processes, and other data from strategic activities.

Components/elements that support the essential information assets, such as hardware infrastructure, software packages, personnel roster, and partnerships.”

— AWS Security Best Practices, ClickIT; Twitter: @ClickIT_Tech

3. Control Access to AWS IoT Resources Using Your Own Identity and Access Management Solution “AWS IoT is a managed cloud platform that lets connected devices easily and securely interact with cloud applications and other devices by using the Message Queuing Telemetry Transport (MQTT) protocol, HTTP, and the MQTT over the WebSocket protocol. Every connected device must authenticate to AWS IoT, and AWS IoT must authorize all requests to determine if access to the requested operations or resources is allowed. Until now, AWS IoT has supported two kinds of authentication techniques: the Transport Layer Security (TLS) mutual authentication protocol and the AWS Signature Version 4 algorithm. Callers must possess either an X.509 certificate or AWS security credentials to be able to authenticate their calls. The requests are authorized based on the policies attached to the certificate or the AWS security credentials.

“However, many of our customers have their own systems that issue custom authorization tokens to their devices. These systems use different access control mechanisms such as OAuth over JWT or SAML tokens. AWS IoT now supports a custom authorizer to enable you to use custom authorization tokens for access control. You can now use custom tokens to authenticate and authorize HTTPS over the TLS server authentication protocol and MQTT over WebSocket connections to AWS IoT.”

— Ramkishore Bhattacharyya, How to Use Your Own Identity and Access Management Systems to Control Access to AWS IoT Resources, AWS Security Blog; Twitter: @AWSCloud
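
As a rough illustration of the custom-authorizer setup described above (not taken from the quoted post), the sketch below registers an authorizer with boto3; the authorizer name, Lambda ARN, token header name, and signing key are all placeholders you would replace with your own.

```python
import boto3

iot = boto3.client("iot")

# All names and ARNs below are hypothetical placeholders.
resp = iot.create_authorizer(
    authorizerName="custom-token-authorizer",
    # Lambda function that validates the custom token (OAuth/JWT/SAML, etc.)
    authorizerFunctionArn="arn:aws:lambda:us-east-1:123456789012:function:validate-device-token",
    tokenKeyName="X-Device-Token",  # header/query parameter carrying the token
    tokenSigningPublicKeys={"KEY1": "-----BEGIN PUBLIC KEY-----<your key>-----END PUBLIC KEY-----"},
    status="ACTIVE",
)
print(resp["authorizerArn"])
```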

4. Be Mindful About Where You Store Your Access Keys “Never store your access keys and secret keys in EC2 instances or any other cloud storage. If you need to access AWS resources from an EC2 instance, you can always use IAM roles.”

— AWS Security Tips: 16 Things To Do For Securing Your AWS Account, DevOpsCube; Twitter: @devopscube
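
To make the point concrete: when an EC2 instance has an IAM role attached, SDK clients such as boto3 pick up temporary credentials from the instance metadata service on their own, so nothing needs to be written to disk. A minimal sketch:

```python
import boto3

# Run on an EC2 instance with an IAM role (instance profile) attached:
# boto3 fetches short-lived credentials from instance metadata automatically,
# so no access key or secret key is ever stored on the instance.
s3 = boto3.client("s3")  # note: no credentials passed explicitly
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])
```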

5. Use AWS Security Services When Integrating Additional Services or Migrating New Workloads Into Your Deployment “When a dev team deploys a workload in AWS, the cloud provider doesn’t protect that application from all external security threats, such as distributed denial-of-service (DDoS) attacks.

“Even when an AWS infrastructure works properly, external attacks can reduce workload performance or render it unavailable. These types of attacks can stop an IT team in its tracks — not to mention cost a fortune in wasted resources.

“This makes it critical to use AWS security services when you integrate additional services or migrate new workloads into your deployment.”

— Stephen Bigelow, Eight tips to roll a service or app into an AWS deployment, SearchAWS; Twitter: @TechTarget, @Stephen_Bigelow

6. Utilize the AWS Identity and Access Management Tool “AWS has an Identity and Access Management tool, also known as AWS IAM, to better manage users who can access resources in the cloud directly. The tool helps to keep a check on unauthorized access and identity theft (it ensures that users’ passwords are changed frequently). Multi-Factor Authentication, or MFA, which is one of the features of the Identity and Access Management tool, is an important practice that enhances the security of the data in the cloud. Additionally, Access Management Control, which is yet another added feature of AWS IAM, ensures that EC2 key pairs can have access to resources only through protocols.”

— Meenakshi Vashisht, AWS Security Challenges – Tips for Effective Management, ISHIR; Twitter: @ISHIR
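
One piece of this that is easy to automate is the “passwords changed frequently” part. The sketch below (hypothetical values, not from the quoted article) sets an account password policy with boto3.

```python
import boto3

iam = boto3.client("iam")

# Hypothetical policy values: force regular rotation and reasonable complexity.
iam.update_account_password_policy(
    MinimumPasswordLength=14,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    AllowUsersToChangePassword=True,
    MaxPasswordAge=90,           # passwords expire after 90 days
    PasswordReusePrevention=5,   # block reuse of the last 5 passwords
)
```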

7. Use Multifactor Authentication on Your Root Account “Your root account has access to all AWS resources; that’s how critical it is. Multifactor authentication helps add additional layers of protection to weed out possibilities of unauthorized access. A safe practice is to have a secured and dedicated device to receive one-time passwords, instead of linking it to a mobile phone. This dedicated device must also be placed in a restricted environment, with automated alerts to help you detect attempts of theft. When you use a mobile phone for the one-time passwords, there’s a tangible risk of device theft compromising the security of your root account’s access.

“You can take your AWS security a notch higher by setting up multifactor authentication to delete CloudTrail buckets. This ensures that anybody who’s able to access your AWS account is not able to manipulate the CloudTrail logs to hide their activities.”

— Rahul Sharma, AWS security tips: How to lock down and protect your data, TechGenix.com; Twitter: @TechGenix
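
For the second point, MFA Delete is enabled through bucket versioning and can only be set by the root account with its MFA device. A sketch with a hypothetical bucket name and MFA serial:

```python
import boto3

# Must be called with root-account credentials; the bucket name and MFA device
# ARN below are placeholders, and "123456" stands in for the current one-time code.
s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="my-cloudtrail-logs",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```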

8. Encrypt Your Amazon Relational Database Service (RDS) Instances “Tackle some of the most common security missteps made when companies make the shift to AWS. This includes encrypting your Amazon Relational Database Service (RDS) instances if they’re not already encrypted at the storage level; AWS provides RDS encryption to ensure data at rest is not at risk. In many cases, this also fulfills corporate compliance requirements such as those mandated by HIPAA or PCI DSS.

“It’s also a good idea to rotate IAM keys for users every three months to ensure old keys aren’t being used to access high-level services. Finally, opt for written access policies over S3 bucket permissions; the List access function, for example, can cause cost spikes if users who don’t need the function are listing objects at high frequency.”

— Favian Raygoza, 5 AWS Security Best Practices to Implement Now, ThinkIT by Single Hop; Twitter: @SingleHop
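
As a sketch of the encryption-at-rest piece (identifiers below are hypothetical): storage encryption is set when an RDS instance is created; an existing unencrypted instance is typically migrated by copying a snapshot with encryption enabled and restoring from it.

```python
import boto3

rds = boto3.client("rds")

# Hypothetical identifiers and instance class; never hard-code real passwords.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    DBInstanceClass="db.t3.micro",
    Engine="postgres",
    MasterUsername="dbadmin",
    MasterUserPassword="replace-with-a-generated-secret",
    AllocatedStorage=20,
    StorageEncrypted=True,           # encrypt data at rest
    # KmsKeyId="alias/my-rds-key",   # optional customer-managed key
)
```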

9. Name Your EC2 Instances Logically “Naming (tagging) your EC2 instances logically and consistently has several advantages, such as providing additional information about the instance location and usage, promoting consistency within the selected environment, quickly distinguishing similar resources from one another, improving clarity in cases of potential ambiguity, and classifying them accurately as compute resources for easy management and billing purposes.”

— EC2 Instance Naming Conventions, Cloud Conformity; Twitter: @cloudconformity
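
A minimal tagging sketch (the instance ID and the env-app-role-number naming scheme are assumptions, not prescribed by the quoted source):

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical instance ID; the Name tag follows an env-app-role-number convention.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Name", "Value": "prod-webapp-web-01"},
        {"Key": "Environment", "Value": "production"},
        {"Key": "Owner", "Value": "platform-team"},
    ],
)
```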

10. Tagging Can Help Manage Resources at Scale “Tagging is an effective tool to help manage AWS resources at increasing scale, providing the ability to identify, classify and locate resources for management and billing purposes.

“Amazon EC2 filtering provides a way to both locate tagged resources and validate that the tagging standards in your organization are being properly implemented. Naming best practices can be leveraged to achieve consistency across your environment and maximize the benefits that tagging has to offer.”

— AWS Naming Convention Best Practices Tagging, Myrtec; Twitter: @myrtec
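
For example, tag filters can be used both to locate resources and to spot instances that are missing expected tags (the tag key and value below are assumptions):

```python
import boto3

ec2 = boto3.client("ec2")

# Locate instances carrying a given tag and report their Name tag (if any).
resp = ec2.describe_instances(
    Filters=[{"Name": "tag:Environment", "Values": ["production"]}]
)
for reservation in resp["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        print(instance["InstanceId"], tags.get("Name", "<no Name tag>"))
```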

11. Use Automated Tools to Help Manage Resource Tags “Implement automated tools to help manage resource tags. The Resource Groups Tagging API enables programmatic control of tags, making it easier to automatically manage, search, and filter tags and resources. It also simplifies backups of tag data across all supported services with a single API call per AWS Region.”

— AWS Tagging Strategies, AWS; Twitter: @awscloud
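
A short sketch of the Resource Groups Tagging API via boto3 (the tag key and value are assumptions), listing every tagged resource in the Region across services:

```python
import boto3

tagging = boto3.client("resourcegroupstaggingapi")

# Page through all resources in the current Region that carry the tag.
paginator = tagging.get_paginator("get_resources")
for page in paginator.paginate(
    TagFilters=[{"Key": "Environment", "Values": ["production"]}]
):
    for resource in page["ResourceTagMappingList"]:
        print(resource["ResourceARN"], resource["Tags"])
```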

12. Your Naming, Security, and Deployment Convention Should Be Easily Understood Across Your Dev and Infrastructure Teams “The problem in organizing each resource, and how it relates to other AWS resources, is making sure that each resource can be reused by the next AWS engineer on your team. A good approach is creating a naming, security, and deployment convention that can easily be understood across the development and infrastructure teams.”

— Kenichi Shibata, AWS Primer on Best Practice in Resource Tagging, KenichiShibata.net

13. Use MFA for Bucket Deletion, and Restrict Access to CloudTrail Bucket Logs “Unrestricted access, even to administrators, increases the risk of unauthorized access in case of stolen credentials due to a phishing attack. If the AWS account becomes compromised, multifactor authentication will make it more difficult for hackers to hide their trail.”

— How to Secure Your Information on AWS: 10 Best Practices, Tripwire; Twitter: @TripwireInc
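
One way to express “MFA for deletion” on the CloudTrail log bucket is a bucket policy that denies deletes from any caller who did not authenticate with MFA (the bucket name below is hypothetical):

```python
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical CloudTrail log bucket; deny object deletion without MFA.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteWithoutMFA",
        "Effect": "Deny",
        "Principal": "*",
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
        "Resource": "arn:aws:s3:::my-cloudtrail-logs/*",
        "Condition": {"BoolIfExists": {"aws:MultiFactorAuthPresent": "false"}},
    }],
}
s3.put_bucket_policy(Bucket="my-cloudtrail-logs", Policy=json.dumps(policy))
```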

14. Keep Instances off When They’re Not in Use “Scheduling your instances to be turned off on nights and weekends when you aren’t using them saves you a ton of money on your cloud bill, but also provides security and protection. Leaving servers and databases on 24/7 is just asking for someone to try to break in and connect to servers within your infrastructure, especially during off-hours when you don’t have as many IT staff keeping an eye on things. By aggressively scheduling your resources to be off as much as possible, you minimize the opportunity for outside attacks on those servers.”

— Chris Parlette, 7 AWS Security Best Practices with ParkMyCloud, ParkMyCloud; Twitter: @ParkMyCloud
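
A minimal off-hours scheduler sketch (the Schedule tag convention is an assumption): run something like this from a cron job or scheduled Lambda in the evening, with a matching start script in the morning.

```python
import boto3

ec2 = boto3.client("ec2")

# Stop every running instance tagged Schedule=office-hours.
resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)
instance_ids = [
    inst["InstanceId"]
    for reservation in resp["Reservations"]
    for inst in reservation["Instances"]
]
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
```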

15. Grants Are a More Flexible Way to Control Access to CMKs in KMS “Key policies are the primary way to control access to customer master keys (CMKs) in KMS. On top of that, you can use IAM policies for authorization. The second way to control access is grants. With a grant, you can allow another AWS principal (e.g., an AWS account) to use a CMK with some restrictions. You could also implement this with the key policy, but grants are more flexible.”

— Michael Wittig, AWS Security Primer, Cloudonaut; Twitter: @hellomichibye
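
A grant sketch (the key ARN, grantee role, and operation list are placeholders): the grantee gets narrowly scoped use of the CMK without any change to the key policy.

```python
import boto3

kms = boto3.client("kms")

# Hypothetical key and grantee ARNs.
grant = kms.create_grant(
    KeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    GranteePrincipal="arn:aws:iam::444455556666:role/app-role",
    Operations=["Encrypt", "Decrypt", "GenerateDataKey"],
)
print(grant["GrantId"])

# Grants can later be revoked without touching the key policy:
# kms.revoke_grant(KeyId=..., GrantId=grant["GrantId"])
```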

16. Limit Access to S3 Buckets to Trusted Administrators “Data stored in S3 buckets is secure by default. Through Identity and Access Management (IAM) policies, bucket policies, and Access Control Lists, users can control exactly who can access S3 buckets. Authenticating identity and restricting access may seem like common sense best practices, however, these actions are often overlooked. You should limit S3 bucket access to trusted administrators and audit their permissions frequently. Know who your vendors are and thoroughly examine their permissions as well. Companies will frequently allow vendors access to vulnerable areas of their network.”

— Ryan O’Donnell, 5 Ways to Avoid Cloud Creepers, Relus Technologies; Twitter: @RelusTech
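
Two easy pieces of this to automate (the bucket name is hypothetical): block public access at the bucket level, and periodically dump the ACL grants as part of the permissions audit.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-app-data"  # hypothetical bucket name

# Block every form of public access on the bucket...
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# ...and audit who currently holds grants on it.
for grant in s3.get_bucket_acl(Bucket=bucket)["Grants"]:
    print(grant["Grantee"], grant["Permission"])
```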

17. Allow Instances to Communicate Only on the TCP/UDP Ports Required “EC2 instances are going to communicate with each other, but there should be communication only on the TCP/UDP ports required. Therefore, it’s recommended to configure Security Groups as virtual firewalls to allow and deny traffic to or from instances. This is the best way to protect instances, or groups of instances, because instances in one group are not going to communicate with instances of another group unless we allow it explicitly. As you can see, a network perimeter firewall to allow and deny traffic between networks is no longer enough; we increasingly need firewalls that protect virtual machines from other virtual machines, even when they are in the same subnet.”

— David Romero Trejo, AWS Security Best Practices, David Romero Trejo
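
A sketch of such a per-tier rule (the security group IDs and the port are assumptions): the database tier accepts PostgreSQL traffic only from the application tier’s security group, and nothing else.

```python
import boto3

ec2 = boto3.client("ec2")

# Hypothetical security group IDs for the database and application tiers.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # database tier
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5432,            # PostgreSQL only
        "ToPort": 5432,
        "UserIdGroupPairs": [{"GroupId": "sg-0fedcba9876543210"}],  # app tier
    }],
)
```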

18. Set Alarms on Billing to Aid in DDoS Attack Detection “Set alarms on billing using Amazon CloudWatch. This practice can be very useful to detect DDoS attacks and high data transfer occurrences.”

— Ankit Giri, AWS Security practices demystified, To The New; Twitter: @TOTHENEW
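
A sketch of the same idea with boto3 (the threshold and SNS topic are assumptions; “Receive Billing Alerts” must be enabled in the account’s billing preferences, and billing metrics only exist in us-east-1):

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when estimated monthly charges exceed a hypothetical $500 threshold.
cloudwatch.put_metric_alarm(
    AlarmName="estimated-charges-over-500-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=500.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```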

19. Use Security Groups “AWS Security Groups act as a virtual firewall, allowing you to control inbound and outbound traffic. Use AWS Security Groups to limit access to administrative services (SSH, RDP, etc.) as well as databases.

“In addition, try to restrict access and allow only certain network ranges when possible. It is also important to monitor and delete security groups that are not being used and to audit them periodically.”

— Nick Ismail, 10 tips for securing AWS public cloud environments, Information Age; Twitter: @InformationAge
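
For the monitoring and cleanup part, a small audit sketch: list security groups that are not attached to any network interface, which makes them candidates for review and deletion. Pagination is omitted for brevity.

```python
import boto3

ec2 = boto3.client("ec2")

all_groups = {
    g["GroupId"]: g["GroupName"]
    for g in ec2.describe_security_groups()["SecurityGroups"]
}
in_use = {
    group["GroupId"]
    for eni in ec2.describe_network_interfaces()["NetworkInterfaces"]
    for group in eni["Groups"]
}

for group_id, name in all_groups.items():
    if group_id not in in_use:
        print(f"unused security group: {group_id} ({name})")
```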

20. Place Virtual Firewalls on Every Virtual Network Created “Instead of just having a firewall at the edge of the infrastructure, place virtual firewalls (available in the AWS Marketplace) on each virtual network that is created.”

— Brandon Butler, How to secure Amazon Web Services: AWS security tips, Network World; Twitter: @computerworlduk

21. Assign IAM Roles to EC2 Instances “IAM roles can be used to define permission levels for different resources and applications that run on EC2 instances. When you launch an EC2 instance, you can assign an IAM role to it, eliminating the need for your applications to use AWS credentials to make API requests. This is one of the best tools when it comes to security in AWS. First of all, IAM roles can be very granular; you can control access at a resource level and for actions that can be performed. And when using IAM roles, if your EC2 instance gets compromised, you do not need to revoke credentials.”

— Useful Tips to Secure AWS, Rite Tech Services; Twitter: @adjodha
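
A sketch of wiring a role onto a running instance with boto3 (the role, profile, policy, and instance ID are placeholders; the attached policy should be scoped to exactly what the application needs):

```python
import json

import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy letting EC2 assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="app-instance-role",
                AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(
    RoleName="app-instance-role",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",  # example scope
)

# EC2 consumes roles through an instance profile.
iam.create_instance_profile(InstanceProfileName="app-instance-profile")
iam.add_role_to_instance_profile(InstanceProfileName="app-instance-profile",
                                 RoleName="app-instance-role")
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "app-instance-profile"},
    InstanceId="i-0123456789abcdef0",  # hypothetical instance
)
```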

22. Not Everyone Needs to be an Admin “Access keys and user access control are integral to AWS security. It may be tempting to give developers administrator rights to handle certain tasks, but you shouldn’t. Not everyone needs to be an admin, and there’s no reason why policies can’t handle most situations. Saviynt’s research found that 35 percent of privileged users in AWS have full access to a wide variety of services, including the ability to bring down the whole customer AWS environment. Another common mistake is leaving high-privilege AWS accounts turned on for terminated users, Saviynt found.

“Administrators often fail to set up thorough policies for a variety of user scenarios, instead choosing to make them so broad that they lose their effectiveness. Applying policies and roles to restrict access reduces your attack surface, as it eliminates the possibility of the entire AWS environment being compromised because a key was exposed, account credentials were stolen, or someone on your team made a configuration error.”

— Fahmida Y. Rashid, 10 AWS security blunders and how to avoid them, InfoWorld; Twitter: @infoworld
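
As an illustration of “policies instead of admin” (the group name, policy name, and bucket are hypothetical): grant the developers group exactly the S3 access a deployment task needs rather than AdministratorAccess.

```python
import json

import boto3

iam = boto3.client("iam")

# Hypothetical scoped policy: read/write on a single deployment bucket only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        "Resource": "arn:aws:s3:::deploy-artifacts/*",
    }],
}
resp = iam.create_policy(
    PolicyName="deploy-artifacts-readwrite",
    PolicyDocument=json.dumps(policy),
)
iam.attach_group_policy(GroupName="developers",
                        PolicyArn=resp["Policy"]["Arn"])
```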

23. Set MFA on Your Root Account With a Hard Token, Rather Than a Soft Token “Activate MFA on your root account. This should be set with a hard token, rather than a soft token. Once set, the hard token should be stored in a secure place, like buried in the backyard or an office safe. I’ve seen teams set the MFA with a soft token, then store the token in a password manager next to the password. While this is convenient, storing both the password and second factor together is not a great strategy; e.g., if your password safe is compromised, the second factor is not effective.

“It’s also good practice to have a few extra hard tokens in the office in case one breaks or you need to create a new account.”

— A best practice guide to getting your Enterprise AWS Account Setup, Stax.io; Twitter: @staxapp

24. Avoid Using Overlapping CIDRs “The first best practice is to organize your AWS environment. We recommend that you use tags. As you continue to add instances and create route tables and subnets, it’s nice to know what connects with what, and the simple use of tags will make life so much easier when it comes to troubleshooting. Make sure you plan your CIDR block very carefully. We would suggest that you go a little bit bigger than you think you need and not smaller.

“Remember that for every subnet you create, AWS reserves five of those IP addresses. So when you create a subnet, know that off the top there’s a five-IP overhead. Avoid using overlapping CIDR blocks, because at some point, maybe not today but down the road, you may want to peer this VPC with another VPC, and if you have overlapping CIDR blocks, the VPC peering will not function correctly and you’re going to find yourself in a configuration nightmare trying to get those VPCs to peer.

“Try to avoid using overlapping CIDRs, and always save a little bit of space for future expansion. There’s no cost associated with using a bigger CIDR block, so don’t undersize what you think you may need from an IP perspective just to try to make it clean and easy.”

— Taran Soodan, Best Practices Learned from 1,000 AWS VPC Configurations, SoftNAS; Twitter: @SoftNAS
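
A small pre-flight check in that spirit (the candidate block is an assumption): compare a proposed CIDR against every existing VPC CIDR before creating the VPC, so future peering is not blocked by overlaps.

```python
import ipaddress

import boto3

ec2 = boto3.client("ec2")

candidate = ipaddress.ip_network("10.2.0.0/16")  # hypothetical new block

existing = [
    ipaddress.ip_network(assoc["CidrBlock"])
    for vpc in ec2.describe_vpcs()["Vpcs"]
    for assoc in vpc["CidrBlockAssociationSet"]
]

overlaps = [str(cidr) for cidr in existing if candidate.overlaps(cidr)]
if overlaps:
    print(f"{candidate} overlaps existing CIDR(s): {', '.join(overlaps)}")
else:
    ec2.create_vpc(CidrBlock=str(candidate))
```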

25. If Your Server Infrastructure Uses More Than One Server, You Should Be Using a VPC “Amazon Virtual Private Cloud (VPC) is a networking feature of EC2 that allows you to define a private network for a group of servers. Using it greatly simplifies fencing off components of your infrastructure and minimizing the externally facing pieces.

“The basic idea is to separate your infrastructure into two halves, a public half and a private half. The external endpoints for whatever you are creating go in the public half. For a web application, this would be your web server or load balancer.

“Services that are only consumed internally, such as databases or caching servers, belong in the private half. Components in the private half are not directly accessible from the public internet.

“This is a form of the principle of least privilege and it’s a good idea to implement it. If your server infrastructure involves more than one server, then you probably should be using a VPC.”
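
A minimal sketch of that public/private split (the CIDRs, the Availability Zone, and the single-AZ layout are simplifying assumptions); only the public subnet is given a route to an internet gateway:

```python
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]

# Public subnet for web servers / load balancers, private subnet for databases.
public_subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"], CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]
private_subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"], CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]

# Only the public subnet gets a route to the internet gateway; the private
# subnet keeps the VPC's local route table and is unreachable from outside.
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"],
                            VpcId=vpc["VpcId"])
public_rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.create_route(RouteTableId=public_rt["RouteTableId"],
                 DestinationCidrBlock="0.0.0.0/0",
                 GatewayId=igw["InternetGatewayId"])
ec2.associate_route_table(RouteTableId=public_rt["RouteTableId"],
                          SubnetId=public_subnet["SubnetId"])
```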

For more information, shoot me an email at