Cloud Security Best Practices: Securing AWS, Azure, and GCP in 2024
Cloud security breaches continue to make headlines. From exposed S3 buckets leaking millions of records to compromised IAM credentials leading to cryptomining attacks, the attack surface in cloud environments is vast. This guide covers essential security practices across AWS, Azure, and GCP.
The Shared Responsibility Model
Before diving into specifics, understand who secures what:
| Layer | Customer Responsibility | Provider Responsibility |
|---|---|---|
| Data | ✓ Classification, encryption, access | |
| Applications | ✓ Code security, patching | |
| Identity | ✓ IAM, MFA, access policies | |
| Network | ✓ Firewall rules, segmentation | ✓ Physical network |
| Compute | ✓ OS patching, hardening | ✓ Hypervisor |
| Storage | ✓ Encryption, access controls | ✓ Physical storage |
| Physical | | ✓ Data center security |
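Most cloud breaches trace back to the customer column, and every control in it can be verified by API. A read-only boto3 spot check of two customer-side items from the table (assumes audit credentials; S3Control raises NoSuchPublicAccessBlockConfiguration when nothing is set at the account level):
import boto3

sts = boto3.client('sts')
account_id = sts.get_caller_identity()['Account']

# Data/Storage rows: account-level S3 public access block
s3control = boto3.client('s3control')
pab = s3control.get_public_access_block(AccountId=account_id)
print('S3 account public access block:', pab['PublicAccessBlockConfiguration'])

# Storage row: default EBS encryption for new volumes
ec2 = boto3.client('ec2')
print('EBS encryption by default:', ec2.get_ebs_encryption_by_default()['EbsEncryptionByDefault'])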
Identity and Access Management (IAM)
Principle of Least Privilege
AWS - Restrictive IAM Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3ReadSpecificBucket",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::company-data-bucket",
        "arn:aws:s3:::company-data-bucket/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "10.0.0.0/8"
        },
        "Bool": {
          "aws:SecureTransport": "true"
        }
      }
    }
  ]
}
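Before attaching a policy like this, it is worth testing what it actually permits. A minimal sketch using the IAM policy simulator via boto3; the local file path and test object ARN are illustrative:
import boto3

iam = boto3.client('iam')

# The restrictive policy above, saved locally (hypothetical path)
with open('restrictive-policy.json') as f:
    policy_json = f.read()

# Supply context values so the IpAddress/SecureTransport conditions can match
response = iam.simulate_custom_policy(
    PolicyInputList=[policy_json],
    ActionNames=['s3:GetObject', 's3:DeleteObject'],
    ResourceArns=['arn:aws:s3:::company-data-bucket/report.csv'],
    ContextEntries=[
        {'ContextKeyName': 'aws:SourceIp', 'ContextKeyValues': ['10.0.0.5'], 'ContextKeyType': 'ip'},
        {'ContextKeyName': 'aws:SecureTransport', 'ContextKeyValues': ['true'], 'ContextKeyType': 'boolean'}
    ]
)

for result in response['EvaluationResults']:
    # Expect 'allowed' for GetObject and 'implicitDeny' for DeleteObject
    print(result['EvalActionName'], '->', result['EvalDecision'])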
Azure - Role Assignment with Conditions:
{
  "properties": {
    "roleDefinitionId": "/subscriptions/{sub}/providers/Microsoft.Authorization/roleDefinitions/{role}",
    "principalId": "{user-principal-id}",
    "condition": "(@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'allowed-container')",
    "conditionVersion": "2.0"
  }
}
GCP - Custom IAM Role:
title: "Limited Storage Reader"
description: "Read-only access to specific bucket"
stage: "GA"
includedPermissions:
- storage.buckets.get
- storage.objects.get
- storage.objects.list
# Bind to a specific resource with an IAM condition (gcloud requires a title)
# gcloud projects add-iam-policy-binding PROJECT_ID \
#   --member="serviceAccount:[email protected]" \
#   --role="projects/PROJECT_ID/roles/limitedStorageReader" \
#   --condition='expression=resource.name.startsWith("projects/_/buckets/allowed-bucket"),title=allowed-bucket-only'
Multi-Factor Authentication (MFA)
# AWS - Enforce MFA for sensitive operations
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllExceptListedIfNoMFA",
      "Effect": "Deny",
      "NotAction": [
        "iam:CreateVirtualMFADevice",
        "iam:EnableMFADevice",
        "iam:GetUser",
        "iam:ListMFADevices",
        "iam:ListVirtualMFADevices",
        "iam:ResyncMFADevice",
        "sts:GetSessionToken"
      ],
      "Resource": "*",
      "Condition": {
        "BoolIfExists": {
          "aws:MultiFactorAuthPresent": "false"
        }
      }
    }
  ]
}
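Enforcing MFA is only half the job; you also need to find who has not enrolled. A small read-only audit sketch with boto3:
import boto3

iam = boto3.client('iam')

# Page through all IAM users and flag any without an enrolled MFA device
paginator = iam.get_paginator('list_users')
for page in paginator.paginate():
    for user in page['Users']:
        mfa_devices = iam.list_mfa_devices(UserName=user['UserName'])['MFADevices']
        if not mfa_devices:
            print(f"No MFA device enrolled: {user['UserName']}")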
Service Account Security
# GCP - Create workload identity for GKE
gcloud iam service-accounts create gke-workload \
  --display-name="GKE Workload Identity"

gcloud iam service-accounts add-iam-policy-binding \
  [email protected] \
  --role="roles/iam.workloadIdentityUser" \
  --member="serviceAccount:PROJECT.svc.id.goog[NAMESPACE/KSA_NAME]"

# Kubernetes ServiceAccount annotation
kubectl annotate serviceaccount KSA_NAME \
  --namespace NAMESPACE \
  iam.gke.io/gcp-service-account=[email protected]
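To confirm the binding works end to end, a pod running under the annotated ServiceAccount should transparently pick up the Google service account identity from the GKE metadata server. A minimal in-pod check, assuming the google-auth package is available in the image:
import google.auth
from google.auth.transport.requests import Request

# Resolves credentials via the GKE metadata server when Workload Identity is active
credentials, project_id = google.auth.default()
credentials.refresh(Request())

# Should report the bound GSA, e.g. gke-workload@PROJECT.iam.gserviceaccount.com
print(getattr(credentials, 'service_account_email', 'unknown'), project_id)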
Network Security
Virtual Private Cloud (VPC) Architecture
┌─────────────────────────────────────────────────────────────┐
│ VPC (10.0.0.0/16) │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Public Subnet (10.0.1.0/24) │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ NAT GW │ │ ALB │ │ Bastion │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Private Subnet (10.0.2.0/24) │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ App Server │ │ App Server │ │ App Server │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
│ │ │
│ ┌─────────────────────────────────────────────────────┐ │
│ │ Data Subnet (10.0.3.0/24) │ │
│ │ ┌─────────────┐ ┌─────────────┐ │ │
│ │ │ RDS │ │ ElastiCache│ │ │
│ │ └─────────────┘ └─────────────┘ │ │
│ └─────────────────────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
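The same layout can be created programmatically. A compressed boto3 sketch of the VPC and the three subnet tiers (region, availability zone, and naming are illustrative; for production, prefer Terraform or CloudFormation so the topology is version-controlled):
import boto3

ec2 = boto3.client('ec2')

# VPC matching the diagram above
vpc = ec2.create_vpc(CidrBlock='10.0.0.0/16')
vpc_id = vpc['Vpc']['VpcId']

# One subnet per tier; real deployments spread each tier across availability zones
tiers = {
    'public': '10.0.1.0/24',   # NAT GW, ALB, bastion
    'private': '10.0.2.0/24',  # app servers
    'data': '10.0.3.0/24',     # RDS, ElastiCache
}
for name, cidr in tiers.items():
    subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone='us-east-1a')
    ec2.create_tags(
        Resources=[subnet['Subnet']['SubnetId']],
        Tags=[{'Key': 'Name', 'Value': f'{name}-subnet'}]
    )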
Security Groups / NSGs
AWS Security Group - Web Tier:
resource "aws_security_group" "web" {
name = "web-tier-sg"
description = "Security group for web servers"
vpc_id = aws_vpc.main.id
# Allow HTTPS from ALB only
ingress {
from_port = 443
to_port = 443
protocol = "tcp"
security_groups = [aws_security_group.alb.id]
}
# Allow health checks
ingress {
from_port = 8080
to_port = 8080
protocol = "tcp"
security_groups = [aws_security_group.alb.id]
}
# Restrict egress
egress {
from_port = 443
to_port = 443
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 5432
to_port = 5432
protocol = "tcp"
security_groups = [aws_security_group.database.id]
}
tags = {
Name = "web-tier-sg"
}
}
Azure Network Security Group:
{
  "name": "web-nsg",
  "properties": {
    "securityRules": [
      {
        "name": "AllowHTTPS",
        "properties": {
          "priority": 100,
          "direction": "Inbound",
          "access": "Allow",
          "protocol": "Tcp",
          "sourceAddressPrefix": "AzureLoadBalancer",
          "sourcePortRange": "*",
          "destinationAddressPrefix": "VirtualNetwork",
          "destinationPortRange": "443"
        }
      },
      {
        "name": "DenyAllInbound",
        "properties": {
          "priority": 4096,
          "direction": "Inbound",
          "access": "Deny",
          "protocol": "*",
          "sourceAddressPrefix": "*",
          "sourcePortRange": "*",
          "destinationAddressPrefix": "*",
          "destinationPortRange": "*"
        }
      }
    ]
  }
}
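Rules drift, so audit them continuously. A boto3 sweep that flags any security group with an inbound rule open to the world:
import boto3

ec2 = boto3.client('ec2')

# Flag any security group with an ingress rule allowing 0.0.0.0/0
paginator = ec2.get_paginator('describe_security_groups')
for page in paginator.paginate():
    for sg in page['SecurityGroups']:
        for rule in sg['IpPermissions']:
            for ip_range in rule.get('IpRanges', []):
                if ip_range.get('CidrIp') == '0.0.0.0/0':
                    print(f"World-open ingress: {sg['GroupId']} ({sg['GroupName']})")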
VPC Flow Logs
# AWS - Enable VPC Flow Logs
aws ec2 create-flow-logs \
  --resource-type VPC \
  --resource-ids vpc-1234567890 \
  --traffic-type ALL \
  --log-destination-type cloud-watch-logs \
  --log-group-name /aws/vpc/flow-logs \
  --deliver-logs-permission-arn arn:aws:iam::ACCOUNT:role/flowlogsRole

# GCP - Enable VPC Flow Logs
gcloud compute networks subnets update SUBNET_NAME \
  --region=REGION \
  --enable-flow-logs \
  --logging-aggregation-interval=interval-5-sec \
  --logging-flow-sampling=0.5 \
  --logging-metadata=include-all
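Flow logs are only useful if someone queries them. A CloudWatch Logs Insights sketch that surfaces the top sources of rejected traffic over the last hour (the log group name matches the AWS command above):
import time

import boto3

logs = boto3.client('logs')

# Top sources of rejected traffic in the last hour
query = """
fields srcAddr, dstPort, action
| filter action = "REJECT"
| stats count(*) as rejects by srcAddr
| sort rejects desc
| limit 10
"""
now = int(time.time())
query_id = logs.start_query(
    logGroupName='/aws/vpc/flow-logs',
    startTime=now - 3600,
    endTime=now,
    queryString=query,
)['queryId']

# Poll until the query completes, then print the result rows
result = logs.get_query_results(queryId=query_id)
while result['status'] in ('Running', 'Scheduled'):
    time.sleep(1)
    result = logs.get_query_results(queryId=query_id)
print(result['results'])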
Data Protection
Encryption at Rest
AWS KMS with Customer Managed Keys:
import json

import boto3

kms = boto3.client('kms')

# The key policy below needs the account ID and region
account_id = boto3.client('sts').get_caller_identity()['Account']
region = kms.meta.region_name

# Create customer managed key
response = kms.create_key(
    Description='Production database encryption key',
    KeyUsage='ENCRYPT_DECRYPT',
    KeySpec='SYMMETRIC_DEFAULT',
    MultiRegion=False,
    Tags=[
        {'TagKey': 'Environment', 'TagValue': 'Production'},
        {'TagKey': 'Purpose', 'TagValue': 'DatabaseEncryption'}
    ]
)
key_id = response['KeyMetadata']['KeyId']

# Set key policy: IAM-based administration plus scoped RDS usage
key_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Enable IAM policies",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
            "Action": "kms:*",
            "Resource": "*"
        },
        {
            "Sid": "Allow RDS to use the key",
            "Effect": "Allow",
            "Principal": {"Service": "rds.amazonaws.com"},
            "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey*"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "kms:CallerAccount": account_id,
                    "kms:ViaService": f"rds.{region}.amazonaws.com"
                }
            }
        }
    ]
}
kms.put_key_policy(KeyId=key_id, PolicyName='default', Policy=json.dumps(key_policy))
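Customer managed KMS keys do not rotate unless you opt in. Continuing from the snippet above, enabling automatic annual rotation is a single call:
# Opt the key into automatic annual rotation (disabled by default for CMKs)
kms.enable_key_rotation(KeyId=key_id)
print(kms.get_key_rotation_status(KeyId=key_id)['KeyRotationEnabled'])  # True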
S3 Bucket Security
resource "aws_s3_bucket" "secure_bucket" {
bucket = "company-secure-data"
}
# Block all public access
resource "aws_s3_bucket_public_access_block" "secure_bucket" {
bucket = aws_s3_bucket.secure_bucket.id
block_public_acls = true
block_public_policy = true
ignore_public_acls = true
restrict_public_buckets = true
}
# Enable versioning
resource "aws_s3_bucket_versioning" "secure_bucket" {
bucket = aws_s3_bucket.secure_bucket.id
versioning_configuration {
status = "Enabled"
}
}
# Server-side encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "secure_bucket" {
bucket = aws_s3_bucket.secure_bucket.id
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.s3_key.arn
sse_algorithm = "aws:kms"
}
bucket_key_enabled = true
}
}
# Bucket policy - enforce encryption and HTTPS
resource "aws_s3_bucket_policy" "secure_bucket" {
bucket = aws_s3_bucket.secure_bucket.id
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Sid = "DenyUnencryptedUploads"
Effect = "Deny"
Principal = "*"
Action = "s3:PutObject"
Resource = "${aws_s3_bucket.secure_bucket.arn}/*"
Condition = {
StringNotEquals = {
"s3:x-amz-server-side-encryption" = "aws:kms"
}
}
},
{
Sid = "DenyInsecureTransport"
Effect = "Deny"
Principal = "*"
Action = "s3:*"
Resource = [
aws_s3_bucket.secure_bucket.arn,
"${aws_s3_bucket.secure_bucket.arn}/*"
]
Condition = {
Bool = {
"aws:SecureTransport" = "false"
}
}
}
]
})
}
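After applying the Terraform, verify the controls actually took effect. A quick read-only boto3 check against the bucket:
import boto3

s3 = boto3.client('s3')
bucket = 'company-secure-data'

# Confirm default encryption and the public access block are both in place
enc = s3.get_bucket_encryption(Bucket=bucket)
rule = enc['ServerSideEncryptionConfiguration']['Rules'][0]
print('SSE:', rule['ApplyServerSideEncryptionByDefault']['SSEAlgorithm'])

pab = s3.get_public_access_block(Bucket=bucket)
print('Public access block:', pab['PublicAccessBlockConfiguration'])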
Secrets Management
# AWS Secrets Manager
import json

import boto3

secrets_manager = boto3.client('secretsmanager')

# Store secret with automatic rotation
response = secrets_manager.create_secret(
    Name='production/database/credentials',
    SecretString=json.dumps({
        'username': 'admin',
        'password': 'initial-password',  # placeholder; replaced on first rotation
        'host': 'db.example.com',
        'port': 5432
    }),
    Tags=[
        {'Key': 'Environment', 'Value': 'Production'}
    ]
)

# Enable rotation
secrets_manager.rotate_secret(
    SecretId='production/database/credentials',
    RotationLambdaARN='arn:aws:lambda:region:account:function:SecretsRotation',
    RotationRules={
        'AutomaticallyAfterDays': 30
    }
)
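On the consuming side, applications should fetch credentials at runtime rather than baking them into config files. Continuing from the snippet above, retrieval is a single call:
# Fetch and parse the current secret version at application startup
secret = secrets_manager.get_secret_value(SecretId='production/database/credentials')
creds = json.loads(secret['SecretString'])
print(creds['host'], creds['port'])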
Container Security
Kubernetes Security Policies
# Pod Security Standards - Restricted
apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
---
# Network Policy - Deny all by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# Allow specific traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web
      ports:
        - protocol: TCP
          port: 8080
Container Image Scanning
# GitHub Actions - Trivy Scanner
name: Container Security Scan
on: push
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t myapp:${{ github.sha }} .
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: 'myapp:${{ github.sha }}'
          format: 'sarif'
          output: 'trivy-results.sarif'
          severity: 'CRITICAL,HIGH'
          exit-code: '1'
      - name: Upload scan results
        uses: github/codeql-action/upload-sarif@v3
        if: always()  # upload even when the scan step fails on findings
        with:
          sarif_file: 'trivy-results.sarif'
Logging and Monitoring
Centralized Logging Architecture
# AWS CloudWatch Agent Configuration
{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/secure",
            "log_group_name": "/aws/ec2/security",
            "log_stream_name": "{instance_id}/secure",
            "retention_in_days": 365
          },
          {
            "file_path": "/var/log/audit/audit.log",
            "log_group_name": "/aws/ec2/audit",
            "log_stream_name": "{instance_id}/audit",
            "retention_in_days": 365
          }
        ]
      }
    }
  },
  "metrics": {
    "metrics_collected": {
      "cpu": {
        "measurement": ["cpu_usage_active"],
        "metrics_collection_interval": 60
      },
      "disk": {
        "measurement": ["used_percent"],
        "metrics_collection_interval": 60,
        "resources": ["/"]
      }
    }
  }
}
Security Alerting
# CloudWatch Alarm for suspicious activity
import boto3

cloudwatch = boto3.client('cloudwatch')

# Alert on root account usage
cloudwatch.put_metric_alarm(
    AlarmName='RootAccountUsage',
    ComparisonOperator='GreaterThanThreshold',
    EvaluationPeriods=1,
    MetricName='RootAccountUsageCount',
    Namespace='CloudTrailMetrics',
    Period=300,
    Statistic='Sum',
    Threshold=0,
    ActionsEnabled=True,
    AlarmActions=['arn:aws:sns:region:account:security-alerts'],
    AlarmDescription='Alert when root account is used',
    TreatMissingData='notBreaching'
)

# CloudWatch Logs Metric Filter that feeds the metric the alarm watches
logs = boto3.client('logs')
logs.put_metric_filter(
    logGroupName='/aws/cloudtrail/logs',
    filterName='RootAccountUsage',
    filterPattern='{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }',
    metricTransformations=[{
        'metricName': 'RootAccountUsageCount',
        'metricNamespace': 'CloudTrailMetrics',
        'metricValue': '1'
    }]
)
Compliance and Governance
AWS Config Rules
# config-rules.yaml
ConfigRules:
  - ConfigRuleName: s3-bucket-public-read-prohibited
    Source:
      Owner: AWS
      SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
  - ConfigRuleName: encrypted-volumes
    Source:
      Owner: AWS
      SourceIdentifier: ENCRYPTED_VOLUMES
  - ConfigRuleName: iam-password-policy
    Source:
      Owner: AWS
      SourceIdentifier: IAM_PASSWORD_POLICY
    InputParameters:
      RequireUppercaseCharacters: "true"
      RequireLowercaseCharacters: "true"
      RequireSymbols: "true"
      RequireNumbers: "true"
      MinimumPasswordLength: "14"
      PasswordReusePrevention: "24"
      MaxPasswordAge: "90"
  - ConfigRuleName: multi-region-cloudtrail-enabled
    Source:
      Owner: AWS
      SourceIdentifier: MULTI_REGION_CLOUD_TRAIL_ENABLED
  - ConfigRuleName: vpc-flow-logs-enabled
    Source:
      Owner: AWS
      SourceIdentifier: VPC_FLOW_LOGS_ENABLED
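Once the rules are deployed, compliance state can be pulled programmatically, which is useful for dashboards and periodic reviews. A read-only sketch:
import boto3

config = boto3.client('config')

# Summarize compliance for every deployed Config rule
paginator = config.get_paginator('describe_compliance_by_config_rule')
for page in paginator.paginate():
    for item in page['ComplianceByConfigRules']:
        status = item['Compliance']['ComplianceType']
        print(f"{item['ConfigRuleName']}: {status}")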
Automated Remediation
# Lambda function for auto-remediation
import boto3

def lambda_handler(event, context):
    """Auto-remediate public S3 buckets"""
    s3 = boto3.client('s3')

    # Get bucket name from Config event
    bucket_name = event['detail']['resourceId']

    # Block public access
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            'BlockPublicAcls': True,
            'IgnorePublicAcls': True,
            'BlockPublicPolicy': True,
            'RestrictPublicBuckets': True
        }
    )

    # Notify security team
    sns = boto3.client('sns')
    sns.publish(
        TopicArn='arn:aws:sns:region:account:security-notifications',
        Subject=f'Auto-remediated public S3 bucket: {bucket_name}',
        Message=f'Public access has been blocked for bucket {bucket_name}'
    )

    return {'statusCode': 200, 'body': 'Remediation complete'}
Cloud Security Checklist
Identity & Access
- MFA enabled for all users
- No root/admin credentials in use
- Service accounts with minimal permissions
- Regular access reviews conducted
- Unused credentials rotated/removed
Network
- VPCs properly segmented
- Security groups follow least privilege
- Flow logs enabled
- WAF configured for web apps
- DDoS protection enabled
Data
- Encryption at rest enabled
- Encryption in transit enforced
- Backup and recovery tested
- Data classification implemented
- Secrets in secrets manager
Compute
- Images scanned for vulnerabilities
- Patches applied regularly
- Containers run as non-root
- Instance metadata v2 (IMDSv2) enforced (see the sketch after this list)
- SSM Session Manager for access
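Several compute items above are enforceable with one API call each. For example, requiring IMDSv2 on an existing instance (the instance ID is a placeholder):
import boto3

ec2 = boto3.client('ec2')

# Require session-token (IMDSv2) access to instance metadata
ec2.modify_instance_metadata_options(
    InstanceId='i-0123456789abcdef0',  # hypothetical instance ID
    HttpTokens='required',
    HttpEndpoint='enabled'
)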
Monitoring
- CloudTrail/Activity Logs enabled
- Security alerts configured
- Log retention policies set
- SIEM integration active
- Incident response runbooks ready
How AIPTx Secures Your Cloud
AIPTx provides comprehensive cloud security assessment:
- Multi-Cloud Scanning: AWS, Azure, GCP configuration analysis
- IAM Analysis: Identify over-privileged accounts and roles
- Network Assessment: Exposed services and misconfigurations
- Compliance Mapping: CIS Benchmarks, SOC 2, HIPAA, PCI-DSS
- Continuous Monitoring: Real-time drift detection
Secure your cloud infrastructure - Start Assessment

