AWS Cloud Practitioner Cheat Sheet: As we all know, AWS certifications are in high demand in the market, so gaining AWS knowledge and testing it has become important. Let's start the AWS certification journey with the AWS Certified Cloud Practitioner (CLF-C01), the beginner-level certification.
I applied for this certification to test my knowledge of AWS; your reason may be different. This certification covers a lot of foundational material that underpins the associate-level and professional-level certifications.
AWS Cloud Practitioner Cheat Sheet
Please find below the AWS Cloud Practitioner cheat sheet I used during my preparation:
Machine Learning services:
- Amazon Rekognition: Recognizes objects, people, text, and scenes in images and videos using ML. Facial analysis and facial search for user verification and people counting.
- Amazon Transcribe: Automatic speech-to-text service that offers high-quality and affordable transcriptions. Uses a deep learning process called automatic speech recognition (ASR) to convert speech to text quickly and accurately. (ex: subtitles)
- Amazon Polly: Text-to-speech service that lets you create audio versions of your notes.
- Amazon Translate: Very similar to Google Translate; translates text from one language to another.
- Amazon Lex: Uses Automatic Speech Recognition (ASR) to convert speech to text. Helps you build chatbots quickly. The same technology that powers Alexa.
- Amazon Connect: Receive calls, create contact flows, cloud-based virtual contact center. No upfront payments, 80% cheaper than traditional contact center solutions.
- Amazon Comprehend: Fully managed and serverless service for Natural Language Processing– NLP.
- Amazon SageMaker: Fully managed service for developers / data scientists to build ML models. Sagemaker allows you to build, train, and deploy machine learning models at any scale.
- Amazon Forecast: Fully managed service that uses ML to deliver highly accurate forecasts. Use cases: Product Demand Planning, Financial Planning, Resource Planning
- Amazon Kendra: Fully managed document search service powered by Machine Learning. Extract answers from within a document, Natural language search capabilities. Learn from user interactions/feedback to promote preferred results (Incremental Learning). (ML-powered search engine).
- Amazon Personalize: Fully managed ML service to build apps with real-time personalized recommendations. Used by amazon.in to provide personalized recommendations.
- Amazon Textract: Automatically extracts text, handwriting, and data from any scanned documents using AI and ML.(detect text and data in documents)
- Amazon WorkSpaces: Managed Desktop-as-a-Service (DaaS) solution to easily provision Windows and Linux desktops. Great for eliminating management of on-premises VDI (virtual desktop infrastructure); pay-as-you-go with monthly or hourly rates. To minimize latency for users in multiple regions, deploy each user's WorkSpace in the region closest to that user.
- AppStream 2.0: Desktop application streaming service. The application is delivered from within a web browser to any computer, without acquiring and provisioning infrastructure.
- Amazon Sumerian: Used to create virtual reality (VR), augmented reality (AR), and 3D applications. Can be used to quickly create 3D models with animations. Accessible via web browser or popular VR/AR hardware.
- AWS IoT Core: allows you to easily connect IoT devices to AWS cloud.
- AWS Device Farm: Fully managed service that tests your web and mobile apps against desktop browsers, real mobile devices, and tablets(real devices – not emulators)
- Elastic Transcoder: Used to convert media files stored in S3 into the formats required by consumer playback devices such as phones.
- AWS Backup: Fully managed service to centrally manage and automate backups across AWS services. Supports point-in-time-recovery (PITR), Cross region backup and cross account backups. On demand and scheduled backups.
- Disaster Recovery Strategies: AWS Elastic Disaster Recovery (formerly known as CloudEndure Disaster Recovery) is a service that helps you implement disaster recovery. Strategies, from cheapest to most expensive:
- Backup and Restore (cheapest)
- Pilot Light: Core functions of the app, ready to scale, but with minimal setup only. Slightly more expensive than Backup and Restore.
- Warm Standby: More expensive. Full version of the app ready in the cloud, but at minimum size.
- Multi-site/Hot-site (most expensive): Full version of the app ready in the cloud at full size.
- AWS DataSync: Used to move large amounts of data from on-premises to AWS. Can synchronize to S3, EFS, or FSx for Windows. Replication tasks can be scheduled hourly, daily, or weekly; tasks are incremental after the first full load.
- AWS Fault Injection Simulator (FIS): A fully managed service for running fault injection experiments on AWS workloads. Based on chaos engineering.
- AWS SQS (Simple Queue Service): Fully managed service used to decouple applications. Default message retention is 4 days, up to a maximum of 14 days.
- AWS SNS (Simple Notification Service): pub/sub model with topics.
- Amazon Kinesis:real-time big data streaming. Managed service to collect, process and analyze the real-time streaming at any scale.
Kinesis data stream–> low latency streaming to ingest data at high scale from hundreds of thousands of sources.
Kinesis Data Firehose –> load streams into S3, Redshift, Elasticsearch, etc.
Kinesis Data Analytics –> performs real-time analytics on streams using SQL.
Kinesis Video Streams –> monitor real-time video stream for analytics or ML.
- Amazon MQ: Use when a company is migrating to the cloud and already uses protocols like MQTT, AMQP, STOMP, OpenWire, WSS, etc. Amazon MQ does not scale like SQS and SNS and runs on dedicated machines. It has both queue features (like SQS) and topic features (like SNS).
- AWS STS (Security Token Service): Enables you to create temporary, limited-privilege credentials to access your AWS services.
- Amazon Cognito: Identity for your web and mobile application users. Example: login with Google, Facebook, or Twitter, which then redirects to the main application.
- Single Sign-On (SSO): Centrally managed single sign-on to access multiple AWS accounts and 3rd-party business applications. Integrated with AWS Organizations and on-premises Active Directory. Supports SAML 2.0.
- Private Cloud–> used by a single organization, not exposed to the public.
- Public Cloud –> like AWS, GCP, Microsoft Azure
- Hybrid Cloud–> on-premise + public cloud offerings
AWS GovCloud is an isolated data center region of the Amazon Web Services (AWS) cloud designed to meet strict compliance requirements as defined by the U.S. Government.
Five Characteristics of Cloud Computing:
- On-demand self-service: Users can provision resources and use them without human interaction from the service provider.
- Broad network access: Resources are available over the network.
- Multi-tenancy and resource pooling: Multiple customers can share the same infrastructure and applications with security and privacy.
- Rapid elasticity and scalability: Quickly and easily scale based on demand.
- Measured service: Usage is measured; users pay only for what they have used.
Six Advantages of Cloud Computing:
1) Trade capital expense (CAPEX) for operational expense (OPEX): pay on demand, don't own hardware; reduced Total Cost of Ownership (TCO).
2) Benefit from massive economies of scale: prices are reduced as AWS becomes more efficient at large scale; the more people use AWS, the lower AWS can drive prices.
3) Stop guessing capacity: scale based on actual measured usage.
4) Increase speed and agility.
5) Stop spending money running and maintaining data centers.
6) Go global in minutes: leverage the AWS global infrastructure.
Cloud Computing Types:
- Infrastructure as a Service (IaaS): Provides building blocks for cloud IT: networking, computers, data storage space. Example: Amazon EC2.
- Platform as a Service (PaaS): Removes the need for your organization to manage the underlying infrastructure; focus on deployment and management of applications. Example: Elastic Beanstalk.
- Software as a Service (SaaS): Completed product that is run and managed by the service provider. Examples: Rekognition for ML, Gmail, Dropbox, etc.
** Desktop as a Service (DaaS): Provides desktops as a service; example: Amazon WorkSpaces.
3 Pricing Fundamentals of the Cloud:
- Compute: Pay for compute time
- Storage: Pay for data stored in the Cloud
- Data transfer OUT of the Cloud: Data transfer IN is free
AWS IAM (Identity and Access Management):
Global service; follow the least-privilege principle. Users or groups can be assigned JSON documents called policies, which define permissions.
Groups contain only users, not other groups.
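A minimal least-privilege policy might look like this; sketched in Python so the JSON can be generated and validated, with a placeholder bucket name:

```python
import json

# A minimal least-privilege IAM policy (hypothetical bucket name):
# read-only access to a single S3 bucket and its objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",    # bucket-level actions
                "arn:aws:s3:::example-bucket/*",  # object-level actions
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attach such a policy to a group rather than to individual users, so permissions stay centrally managed.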
MFA (Multi-Factor Authentication):
- Virtual MFA Device:Google Authenticator, Authy
- Universal 2nd Factor (U2F) Security Key –> Physical Device, YubiKey by Yubico
- Hardware Key Fob MFA device: provided by Gemalto. Physical Device
**Hardware key Fob MFA device for AWS GovCloud(US) –>provided by SurePassID.
AWS CloudShell is a browser-based shell that makes it easy to securely manage, explore, and interact with your AWS resources.
IAM Security Tools:
1)IAM Credential Reports (Account Level): a report that lists all your account’s users and the status of their various credentials.
2)IAM Access Advisor (User Level): Access advisor shows the service permissions granted to a user and when those services were last accessed.
Security Groups: Act as a firewall on our EC2 instances and contain only allow rules. Security groups are locked down to a region/VPC combination.
EC2 User Data: The EC2 user data script runs with the root user to bootstrap our instances. Bootstrapping means launching commands when a machine starts. The script is only run once, at the instance's first start.
EC2 Instance Connect: Used to connect to your EC2 instance from the browser. Port 22 needs to be open to access the instance via EC2 Instance Connect.
***Never store API keys on an EC2 instance. If credentials are needed, use IAM roles.***
EC2 Purchasing Options: analogy of staying in hotel room
1)On-Demand Instances – short workload, predictable pricing, pay by second. It has the highest cost but no upfront payment.
2)Reserved (1 & 3 years): Up to 72% discount compared to On-demand. You reserve a specific instance attributes (Instance Type, Region, Tenancy, OS)
Reservation Period – 1 year (+discount) or 3 years (+++discount). Payment Options – No Upfront (+), Partial Upfront (++), All Upfront (+++).
Reserved Instance’s Scope – Regional or Zonal. You can buy and sell in the Reserved Instance Marketplace
- Reserved Instances – long workloads
- Convertible Reserved Instances – long workloads with flexible instances.
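The discount tiers above can be made concrete with simple arithmetic (the hourly rate here is an assumption, not a real AWS price):

```python
# Illustrative 3-year cost: on-demand vs. an all-upfront reservation
# at the maximum 72% discount. The hourly rate is an assumed figure.
on_demand_hourly = 0.10              # $/hour, assumption
hours_per_year = 24 * 365

on_demand_3yr = on_demand_hourly * hours_per_year * 3
reserved_3yr = on_demand_3yr * (1 - 0.72)   # up to 72% off on-demand

print(f"On-demand: ${on_demand_3yr:,.2f}  Reserved: ${reserved_3yr:,.2f}")
```

The trade-off is classic CAPEX vs. OPEX: you pay (some of) the money up front in exchange for a much lower effective hourly rate.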
3)Savings Plans (1 & 3 years) –commitment to an amount of usage, long workload. Commit to a certain type of usage ($10/hour for 1 or 3 years). Usage beyond EC2 Savings Plans is billed at the On-Demand price.
Locked to a specific instance family & AWS region. Flexible across: Instance Size (e.g., m5.xlarge, m5.2xlarge), OS (e.g., Linux, Windows), Tenancy (Host, Dedicated, Default)
4)Spot Instances – short workloads, cheap, can lose instances (less reliable). Up to 90% discount. The MOST cost-efficient instances in AWS. Not suitable for critical jobs or databases.
5)Dedicated Hosts – book an entire physical server, control instance placement. The most expensive option; useful for software with complicated licensing models (BYOL) or for companies with strong regulatory or compliance needs. Purchasing options: On-Demand (pay per second for an active Dedicated Host) or Reserved (1 or 3 years – No Upfront, Partial Upfront, All Upfront).
6)Dedicated Instances – instances run on hardware dedicated to you; no other customers share your hardware, but you have no control over instance placement.
7)Capacity Reservations – reserve capacity in a specific AZ for any duration. Combine with Regional Reserved Instances and Savings Plans to benefit from billing discounts.
You’re charged at the On-Demand rate whether you run instances or not. Suitable for short-term, uninterrupted workloads that need to be in a specific AZ.
Elastic Block Store Volume (EBS Volume): a network drive you can attach to your instances while they run. It allows your instances to persist data, even after their termination.
They can only be mounted to one instance at a time (at the CCP level).
EBS – Delete on Termination attribute: by default, the root EBS volume is deleted on termination (attribute enabled), while any other attached EBS volume is not deleted (attribute disabled).
EBS Snapshot: Backup of EBS. Features are:
- EBS Snapshot Archive: Move snapshot to “archive tier” that is cheaper.
- Recycle Bin: Protects your Amazon EBS Snapshots and Amazon Machine Images (AMIs) from accidental deletion. You specify a configurable retention period within which you can recover these resources after they have been deleted. Snapshots in the Recycle Bin incur the same charges as regular EBS Snapshots.
EC2 Image Builder: automatically build, test and distribute AMIs. Free service (only pay for the underlying resources)
EC2 Instance Store: If you need a high-performance hardware disk, use EC2 Instance Store. EC2 Instance Store has Better I/O performance and lose their storage if they’re stopped (ephemeral). Good for buffer / cache / scratch data / temporary content.
Elastic File System (EFS): Managed NFS (network file system) that can be mounted on hundreds of EC2 instances. EFS works with Linux EC2 instances across multiple AZs.
EFS is highly available, scalable, expensive (3x gp2), pay per use, no capacity planning.
EFS storage classes offer a cost-optimized option for files not accessed every day:
- EFS Standard: standard EFS storage class.
- EFS Infrequent Access (EFS-IA): EFS automatically moves your files to EFS-IA based on the last time they were accessed.
Amazon FSx: Fully managed service, Launch 3rd party high-performance file systems on AWS
- FSx for Windows File Server: supports the SMB protocol & Windows NTFS.
- FSx for Lustre: Lustre = Linux + cluster. For Linux only; fully managed, high-performance, scalable file storage for High Performance Computing (HPC).
- FSx for NetApp ONTAP
Elastic Load Balancer: An ELB (Elastic Load Balancer) is a managed load balancer.
3 kinds of load balancers offered by AWS:
1) Application Load Balancer (HTTP / HTTPS only) – Layer 7
2) Network Load Balancer (ultra-high performance, allows for TCP) – Layer 4
3) Classic Load Balancer (slowly retiring) – Layer 4 & 7
Gateway Load balancer: Layer 3 Gateway + Layer 4 Load Balancing, Gateway Load Balancer helps you easily deploy, scale, and manage your third-party virtual appliances. It gives you one gateway for distributing traffic across multiple virtual appliances while scaling them up or down, based on demand. Gateway Load Balancer runs within one AZ.
ASG: Auto Scaling Group:
Vertical Scalability –> increasing the size of an EC2 instance, e.g., from t2.micro to m5.large (scale up/down)
Horizontal Scalability –> adding more instances of the same type (scale in/out)
Auto Scaling Groups – Scaling Strategies:
1)Manual Scaling: Update the size of an ASG manually
2)Dynamic Scaling: Respond to changing demand
- A) Simple/Step Scaling B) Target Tracking Scaling C) Scheduled Scaling D) Predictive Scaling
Amazon S3 (Simple Storage Service): Bucket names must be globally unique, but buckets are defined at the region level.
S3 Bucket objects: Max object size is 5 TB (5,000 GB). If uploading more than 5 GB, you must use “multi-part upload”.
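The multi-part rule is easy to sketch as arithmetic; the part size below is an illustrative choice (S3 allows parts up to 5 GiB each, with at most 10,000 parts per object):

```python
import math

# Sketch: number of multi-part upload parts needed for a given object,
# at a chosen part size (100 MiB here, an illustrative choice).
def parts_needed(object_bytes, part_bytes=100 * 1024**2):
    return math.ceil(object_bytes / part_bytes)

print(parts_needed(50 * 1024**3))   # parts for a 50 GiB object
```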
Note: an IAM principal can access an S3 object if the user IAM permissions allow it OR the resource policy ALLOWS it AND there’s no explicit DENY
S3 Bucket policies: JSON documents like IAM policies.
S3 Pre-sign URL: A pre-signed URL is a URL that you can provide to your users to grant temporary access to a specific S3 object
S3 Websites: S3 can host static websites and have them accessible on the www. If you get a 403 (Forbidden) error, make sure the bucket policy allows public reads!
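The public-read bucket policy needed for static website hosting typically looks like this (“example-bucket” is a placeholder name):

```python
import json

# Bucket policy granting public read access, as needed for S3 static
# website hosting; "example-bucket" is a placeholder name.
public_read_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",            # anyone on the internet
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*",
        }
    ],
}

print(json.dumps(public_read_policy, indent=2))
```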
S3 Versioning: Enabled at bucket level. Any file that is not versioned prior to enabling versioning will have version “null”. Suspending versioning does not delete the previous versions
S3 Replication (CRR & SRR): Cross Region replication and Same Region Replication
Must enable versioning in source and destination. Buckets can be in different accounts
Copying is asynchronous. Must give proper IAM permissions to S3
CRR – Use cases: compliance, lower latency access, replication across accounts
SRR – Use cases: log aggregation, live replication between production and test accounts
AWS Policy Generator: Used to generate policies, which are JSON documents.
S3 Storage Classes
1)Amazon S3 Standard – General Purpose: 99.99% Availability, Used for frequently accessed data, low latency and high throughput. Use Cases: Big Data analytics, mobile & gaming applications, content distribution.
2)Amazon S3 Standard-Infrequent Access (IA): For data that is less frequently accessed but requires rapid access when needed. Lower cost than S3 Standard. 99.9% Availability. Use cases: Disaster Recovery, backups.
3)Amazon S3 One Zone-Infrequent Access: High durability (99.999999999%) in a single AZ; data lost when AZ is destroyed. 99.5% Availability.
Use Cases: Storing secondary backup copies of on-premises data, or data you can recreate
Amazon S3 Glacier Storage Classes: Low-cost object storage meant for archiving / backup. Pricing: price for storage + object retrieval cost
4)Amazon S3 Glacier Instant Retrieval: Millisecond retrieval, great for data accessed once a quarter, minimum storage duration of 90 days
5)Amazon S3 Glacier Flexible Retrieval: Expedited (1 to 5 minutes), Standard (3 to 5 hours), Bulk (5 to 12 hours) – free. Minimum storage duration of 90 days
6)Amazon S3 Glacier Deep Archive: for long term storage. Standard (12 hours), Bulk (48 hours). Minimum storage duration of 180 days
7)Amazon S3 Intelligent Tiering: Small monthly monitoring and auto-tiering fee. Moves objects automatically between Access Tiers based on usage. There are no retrieval charges in S3 Intelligent-Tiering
- A) Frequent Access tier (automatic):default tier
- B) Infrequent Access tier (automatic):objects not accessed for 30 days
- C) Archive Instant Access tier (automatic): objects not accessed for 90 days
- D) Archive Access tier (optional):configurable from 90 days to 700+ days
- E) Deep Archive Access tier (optional): from 180 days to 700+ days
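The cost trade-off across storage classes can be illustrated with a quick calculation (the per-GB prices below are assumptions for the sketch, not current AWS list prices):

```python
# Monthly storage cost for 1 TB across a few classes; per-GB prices here
# are assumptions for illustration, not current AWS list prices.
prices_per_gb_month = {
    "S3 Standard": 0.023,
    "S3 Standard-IA": 0.0125,
    "S3 Glacier Deep Archive": 0.00099,
}
size_gb = 1024

costs = {cls: size_gb * p for cls, p in prices_per_gb_month.items()}
for cls, cost in costs.items():
    print(f"{cls}: ${cost:.2f}/month")
```

The archival classes are dramatically cheaper per GB, but you pay retrieval costs and accept minimum storage durations.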
S3 Object Lock: Adopt a WORM (Write Once Read Many) model. Block an object version deletion for a specified amount of time.
Glacier Vault Lock: Adopt a WORM (Write Once Read Many) model. Lock the policy for future edits (can no longer be changed). Helpful for compliance and data retention.
S3 Encryption – no encryption, server-side encryption and client-side encryption
AWS Snow Family: Highly-secure, portable devices to collect and process data at the edge, and migrate data into and out of AWS.
The AWS Snow Family are offline devices used to perform data migrations. If it would take more than a week to transfer over the network, use Snowball devices.
1) AWS Snowcone: Small, portable computing, anywhere, rugged & secure, withstands harsh environments. 8 TBs of usable storage. Can be sent back to AWS offline, or connect it to Internet and use AWS DataSync to send data.
2) Snowball Edge (for data transfers): Physical data transport solution: move TBs or PBs of data in or out of AWS. Provide block storage and Amazon S3-compatible object storage
- A) Snowball Edge Storage Optimized:80 TB of HDD capacity for block volume and S3 compatible object storage
- B) Snowball Edge Compute Optimized:42 TB of HDD capacity for block volume and S3 compatible object storage
3)AWS Snowmobile: Transfer exabytes of data. Each Snowmobile has 100 PB of capacity. Better than Snowball if you transfer more than 10 PB.
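The week-long rule of thumb for choosing Snowball over network transfer can be checked with back-of-the-envelope arithmetic:

```python
# Days needed to push a dataset over a network link at a sustained rate;
# used to sanity-check the "more than a week -> Snowball" rule of thumb.
def transfer_days(data_tb, link_mbps):
    bits = data_tb * 1e12 * 8            # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6)   # Mbit/s -> bit/s
    return seconds / 86400

# 100 TB over a sustained 100 Mbit/s link:
print(round(transfer_days(100, 100), 1), "days")
```

At roughly three months for 100 TB on a 100 Mbit/s line, shipping a Snowball device is clearly the faster option.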
AWS OpsHub: Software that you install on your computer to manage your Snow Family devices.
AWS Storage Gateway: Bridge between on-premises data and cloud data in S3. Hybrid storage service that allows on-premises systems to seamlessly use the AWS Cloud.
3 types are : 1) File Gateway 2) Volume Gateway 3) Tape Gateway
1) RDS: Relational Database, SQL database. Postgres, MySQL, MariaDB, Oracle, Microsoft SQL Server, Aurora (AWS Proprietary database)
RDS is a managed service:
- Automated provisioning, OS patching
- Continuous backups and restore to specific timestamp (Point in Time Restore)!
- Monitoring dashboards
- Read replicas for improved read performance
- Multi AZ setup for DR (Disaster Recovery)
- Maintenance windows for upgrades
- Scaling capability (vertical and horizontal)
- Storage backed by EBS (gp2 or io1)
RDS Deployments: Read Replicas– scale your read workloads & Multi-AZ- Failover in case of AZ outage.
2) Amazon Aurora: Relational Database. Aurora is a proprietary technology from AWS. PostgreSQL and MySQL are both supported as Aurora DB.
Aurora is “AWS cloud optimized” and claims 5x performance improvement over MySQL on RDS, over 3x the performance of Postgres on RDS
Aurora storage automatically grows in increments of 10GB, up to 64 TB. Aurora costs more than RDS (20% more) – but is more efficient
3) Amazon ElastiCache: In-memory database with high performance and low latency. ElastiCache is used to get managed Redis or Memcached. Helps reduce load off databases for read-intensive workloads.
4) DynamoDB: NoSQL, non-relational, key-value database. Fully managed, highly available with replication across 3 AZs. Scales to massive workloads; distributed “serverless” database. Millions of requests per second, trillions of rows, hundreds of TB of storage.
Fast and consistent in performance. Single-digit millisecond latency – low latency retrieval. Standard & Infrequent Access (IA) Table Class
DynamoDB Accelerator – DAX: Fully Managed in-memory cache for DynamoDB. Secure, highly scalable & highly available.
10x performance improvement – single digit millisecond latency to microseconds latency – when accessing your DynamoDB tables.
Difference with ElastiCache: DAX is only used for and is integrated with DynamoDB, while ElastiCache can be used for other databases
DynamoDB – Global Tables: Make a DynamoDB table accessible with low latency in multiple-regions. Active-Active replication (read/write to any AWS Region).
5) Redshift: It’s OLAP – online analytical processing used for analytics and data warehousing. Columnar storage of data, Massively Parallel Query Execution (MPP), highly available.
Data is typically loaded once every hour, not every second. 10x better performance than other data warehouses; scales to PBs of data.
6) Amazon EMR -Elastic Map Reduce: EMR helps creating Hadoop clusters (Big Data) to analyze and process vast amount of data.
Also supports Apache Spark, HBase, Presto, Flink. Use cases: data processing, machine learning, web indexing, big data
7) Amazon Athena: Serverless query service to analyze data stored in Amazon S3. Uses standard SQL language to query the files.
Built on Presto, use compressed or columnar data for cost-savings. Use cases: Business intelligence / analytics / reporting, analyze & query VPC Flow Logs, ELB Logs, CloudTrail trails, etc.
Exam Tip: analyze data in S3 using serverless SQL, use Athena
8) Amazon QuickSight: Serverless machine learning-powered business intelligence service to create interactive dashboards. Fast, automatically scalable, embeddable, with per-session pricing.
Use cases: Business analytics, building visualizations, perform ad-hoc analysis, Get business insights using data
9) DocumentDB: DocumentDB is the AWS implementation of MongoDB, which is used to store, query, and index JSON data.
10) Amazon Neptune: Fully managed graph database. A popular graph dataset would be a social network. Great for knowledge graphs (Wikipedia), fraud detection, recommendation engines, social networking.
11)Amazon QLDB (Quantum Ledger Database): used for recording financial transactions (no decentralization component). Used to review history of all the changes made to your application data over time.
Immutable system: no entry can be removed or modified, cryptographically verifiable.
12) Amazon Managed Blockchain: Blockchain makes it possible to build applications where multiple parties can execute transactions without the need for a trusted, central authority.
- Amazon Managed Blockchain is a managed service to: Join public blockchain networks Or create your own scalable private network.
- Compatible with the frameworks Hyperledger Fabric & Ethereum
13) AWS Glue: Managed extract, transform, and load (ETL) service; fully serverless. Glue Data Catalog: catalog of datasets.
DMS – Database Migration Service: Quickly and securely migrate databases to AWS, resilient, self-healing. The source database remains available during the migration.
Homogeneous migrations: ex Oracle to Oracle and Heterogeneous migrations: ex Microsoft SQL Server to Aurora.
ECS: Elastic Container Service – Launch Docker containers on AWS. You must provision & maintain the infrastructure (the EC2 instances).
ECR: Elastic Container Registry where containers are stored. This is Private Docker Registry on AWS.
Fargate: Serverless offering, used to launch containers. You do not provision the infrastructure (no EC2 instances to manage)
AWS Lambda: Function as a Service (FaaS), serverless. Virtual functions – no servers to manage. For short executions; runs on demand and scaling is automated. Execution time is up to 15 minutes.
Lambda Container Images: The container image must implement the lambda runtime API.
Event-Driven: functions get invoked by AWS when needed
AWS Lambda Pricing: 1) Pay per call 2) Pay per duration.
It is usually very cheap to run AWS Lambda so it’s very popular
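A rough Lambda bill combines both pricing dimensions; the rates below are assumptions for the sketch (check current AWS pricing):

```python
# Rough monthly Lambda bill: per-request charge plus per-GB-second
# duration charge. Rates and workload figures are assumed for this sketch.
requests = 3_000_000
avg_duration_s = 0.2
memory_gb = 0.5

price_per_million_requests = 0.20       # $, assumption
price_per_gb_second = 0.0000166667      # $, assumption

request_cost = requests / 1_000_000 * price_per_million_requests
duration_cost = requests * avg_duration_s * memory_gb * price_per_gb_second

print(f"Estimated bill: ${request_cost + duration_cost:.2f}")
```

A few dollars for millions of invocations is why Lambda is considered very cheap for short, bursty workloads.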
Amazon API Gateway: Serverless, scalable, fully managed service for developers to easily create, publish, maintain, monitor, and secure APIs
Supports RESTful APIs and WebSocket APIs. Support for security, user authentication, API throttling, API keys, monitoring.
AWS Batch: Fully managed batch processing at any scale. Batch will dynamically launch EC2 instances or Spot Instances. A “batch” job is a job with a start and an end. Batch jobs are defined as Docker images and run on ECS.
Amazon Lightsail: Virtual servers, storage, databases, and networking, low & predictable pricing. Simpler alternative to using EC2, RDS, ELB, EBS, Route 53. Great for people with little cloud experience.
CloudFormation (IaC – Infrastructure as Code): CloudFormation is a declarative way of outlining your AWS infrastructure, for almost any resource. For example, within a CloudFormation template, you say: I want a security group, two EC2 instances using this security group, an S3 bucket, and a load balancer (ELB) in front of these machines.
Then CloudFormation creates those for you, in the right order, with the exact configuration that you specify.
The code is “compiled” into a CloudFormation template (JSON/YAML). You can therefore deploy infrastructure and application runtime code together.
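A minimal template sketch in JSON form (built as a Python dict here so it can be validated; the AMI ID and instance type are placeholders) could look like:

```python
import json

# Minimal CloudFormation template (JSON form): one security group and one
# EC2 instance that uses it. The AMI ID and instance type are placeholders.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebSG": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "Allow HTTP",
                "SecurityGroupIngress": [
                    {"IpProtocol": "tcp", "FromPort": 80,
                     "ToPort": 80, "CidrIp": "0.0.0.0/0"}
                ],
            },
        },
        "WebServer": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-12345678",   # placeholder AMI
                "InstanceType": "t2.micro",
                "SecurityGroups": [{"Ref": "WebSG"}],
            },
        },
    },
}

print(json.dumps(template, indent=2))
```

The `Ref` from the instance to the security group is how CloudFormation infers the right creation order.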
AWS Elastic Beanstalk: Elastic Beanstalk is a developer centric view of deploying an application on AWS. Beanstalk is Platform as a Service (PaaS) and is free but you pay for the underlying instances.
Three architecture models:
- Single Instance deployment: good for dev
- LB + ASG: great for production or pre-production web applications
- ASG only: great for non-web apps in production (workers, etc..)
Elastic Beanstalk – Health Monitoring: Health agent pushes metrics to CloudWatch. Checks for app health, publishes health events
AWS CodeDeploy: A hybrid service that deploys applications to EC2 instances and on-premises servers.
AWS CodeCommit: is a Source-control service that hosts Git-based repositories to store code.
AWS CodeBuild: Code building service in the cloud. It compiles source code, run tests, and produces packages that are ready to be deployed.
AWS CodePipeline: Orchestrate the different steps to have the code automatically pushed to production. Code => Build => Test => Provision => Deploy. Basis for CICD (Continuous Integration & Continuous Delivery)
Fast delivery & rapid updates. Fully managed, compatible with CodeCommit, CodeBuild, CodeDeploy, Elastic Beanstalk, CloudFormation, GitHub, 3rd-party services (GitHub…) & custom plugins.
AWS CodeArtifact: CodeArtifact is a secure, scalable, and cost-effective artifact management for software development. Storing and retrieving code dependencies is called artifact management
Developers and CodeBuild can then retrieve dependencies straight from CodeArtifact.
AWS CodeStar: Unified UI to easily manage software development activities in one place. “Quick way” to get started to correctly set-up CodeCommit, CodePipeline, CodeBuild, CodeDeploy, Elastic Beanstalk, EC2, etc.
AWS Cloud9: AWS Cloud9 is a cloud IDE (Integrated Development Environment) for writing, running and debugging code. A cloud IDE can be used within a web browser, meaning you can work on your projects from your office, home, or anywhere with internet with no setup necessary. AWS Cloud9 also allows for code collaboration in real-time (pair programming)
AWS Systems Manager (SSM): is a hybrid service, helps you manage your EC2 and On-Premises systems at scale. Get operational insights about the state of your infrastructure.
Features are: Patching automation for enhanced compliance, Run commands across an entire fleet of servers
Systems Manager – SSM Session Manager: hybrid service, allows you to start a secure shell on your EC2 and on-premises servers.
No SSH access, bastion hosts, or SSH keys needed. No port 22 needed (better security). Send session log data to S3 or CloudWatch Logs.
AWS OpsWorks: Managed Chef & Puppet. Helps you perform server configuration automatically, or repetitive actions.
DDoS Protection on AWS:
1) AWS Shield 2) AWS WAF 3) Amazon CloudFront 4) Route 53
AWS Shield: protects from DDoS attacks.
1) AWS Shield Standard: Free service that is activated for every AWS customer. Provides protection from attacks such as SYN/UDP Floods, Reflection attacks and other layer 3/layer 4 attacks
2) AWS Shield Advanced: Optional service with 24/7 premium DDoS protection and access to the AWS DDoS Response Team (DRT). Protects against higher fees during usage spikes due to DDoS.
Protect against more sophisticated attack on Amazon EC2, Elastic Load Balancing (ELB), Amazon CloudFront, AWS Global Accelerator, and Route 53
AWS WAF (Web Application Firewall) : Filter specific requests based on rules. Protects your web applications from common web exploits (Layer 7-HTTP). Deploy on Application Load Balancer, API Gateway, CloudFront.
Protects from common attack – SQL injection and Cross-Site Scripting (XSS). Size constraints, geo-match (block countries).
Rate-based rules (to count occurrences of events) – for DDoS protection
Penetration Testing: AWS customers are welcome to carry out security assessments or penetration tests against their AWS infrastructure without prior approval for 8 services. For other AWS services, customers need to take approval from AWS.
1) Amazon EC2 instances, NAT Gateways, and Elastic Load Balancers
2) Amazon RDS
3) Amazon CloudFront
4) Amazon Aurora
5) Amazon API Gateways
6) AWS Lambda and Lambda@Edge functions
7) Amazon Lightsail resources
8) Amazon Elastic Beanstalk environments
Prohibited Actions During Penetration Testing are:
- DNS zone walking via Amazon Route 53 Hosted Zones
- Denial of Service (DoS), Distributed Denial of Service, Simulated DoS, Simulated DDoS
- Port flooding
- Protocol flooding
- Request flooding (login request flooding, API request flooding)
AWS KMS- Key Management Service: AWS KMS is a managed service that enables you to easily create and control the keys used for cryptographic operations. The service provides a highly available key generation, storage, management, and auditing solution
For you to encrypt or digitally sign data within your own applications or control the encryption of data across AWS services.
CloudHSM: Lets you provision dedicated encryption hardware (HSMs) to store your keys. The HSM device is tamper-resistant and FIPS 140-2 Level 3 compliant.
AWS Certificate Manager (ACM): Lets you easily provision, manage, and deploy public and private Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificates for use with AWS services and your internal connected resources.
AWS Secrets Manager: Newer service, meant for storing secrets. Can force rotation of secrets every X days and automate generation of new secrets on rotation (uses Lambda).
Integrates with Amazon RDS (MySQL, PostgreSQL, Aurora). Secrets are encrypted using KMS.
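The "rotate every X days" behavior boils down to simple date arithmetic. The helper below is hypothetical (not a Secrets Manager API); it just shows when rotations would fall due under a 30-day interval.

```python
from datetime import date, timedelta

def next_rotations(last_rotated, every_days, count=3):
    """Sketch: given the last rotation date and a rotation interval
    ("rotate every X days"), list the next `count` due dates."""
    return [last_rotated + timedelta(days=every_days * i) for i in range(1, count + 1)]

dates = next_rotations(date(2024, 1, 1), every_days=30)
# → [date(2024, 1, 31), date(2024, 3, 1), date(2024, 3, 31)]
```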
AWS Artifact: Portal that provides customers with on-demand access to AWS compliance documentation and AWS agreements. Can be used to support internal audit or compliance
- Artifact Reports – Allows you to download AWS security and compliance documents from third-party auditors, like AWS ISO certifications, Payment Card Industry (PCI), and System and Organization Control (SOC) reports.
- Artifact Agreements – Allows you to review, accept, and track the status of AWS agreements such as the Business Associate Addendum (BAA) or the Health Insurance Portability and Accountability Act (HIPAA) for an individual account or in your organization.
Amazon GuardDuty: Intelligent threat discovery to protect your AWS Account. Uses Machine Learning algorithms, anomaly detection, and 3rd-party data.
Can protect against cryptocurrency attacks (has a dedicated "finding" for it)
Input data includes:
- CloudTrail Events Logs – unusual API calls, unauthorized deployments
- CloudTrail Management Events – create VPC subnet, create trail, …
- CloudTrail S3 Data Events – get object, list objects, delete object, …
- VPC Flow Logs – unusual internal traffic, unusual IP address
- DNS Logs – compromised EC2 instances sending encoded data within DNS queries
- Kubernetes Audit Logs – suspicious activities and potential EKS cluster compromises
Amazon Inspector: Automated Security Assessments.
1) For EC2 instances: Leverages the AWS Systems Manager (SSM) agent. Analyzes against unintended network accessibility. Analyzes the running OS against known vulnerabilities
2) For containers pushed to Amazon ECR: Assessment of containers as they are pushed
AWS Config: Helps with auditing and recording compliance of your AWS resources. Helps record configurations and changes over time.
Amazon Macie: Fully managed data security and data privacy service that uses machine learning and pattern matching to discover and protect your sensitive data in AWS. Macie helps identify and alert you to sensitive data, such as personally identifiable information (PII).
AWS Security Hub:
- Central security tool to manage security across several AWS accounts and automate security checks
- Integrated dashboards showing current security and compliance status to quickly take actions
- Automatically aggregates alerts in predefined or personal findings formats from various AWS services & AWS partner tools:
- GuardDuty, Inspector, Macie, IAM Access Analyzer, AWS Systems Manager, AWS Firewall Manager, AWS Partner Network Solutions
- Must first enable the AWS Config Service
Amazon Detective: analyzes, investigates, and quickly identifies the root cause of security issues or suspicious activities (using ML and graphs)
- Automatically collects and processes events from VPC Flow Logs, CloudTrail, GuardDuty and create a unified view
- Produces visualizations with details and context to get to the root cause
AWS Abuse: Report suspected AWS resources used for abusive or illegal purposes.
The AWS Acceptable Use Policy (AUP) applies to all customers of AWS cloud services. The policy states:
No illegal, Harmful or Offensive use or Content
No security violations
No Network abuse
No E-mail or Message Abuse
Every organization must adhere to these rules when moving its applications to the AWS cloud.
An Internet Gateway helps your VPC instances connect to the internet. It connects public subnet instances to the internet over the public network.
NAT Gateways (AWS-managed) & NAT Instances (self-managed) allow your instances in your Private Subnets to access the internet while remaining private.
NACL (Network ACL): Stateless, subnet rules for inbound and outbound
- A firewall which controls traffic to and from a subnet
- Can have ALLOW and DENY rules.
- Are attached at the Subnet level
- Rules only include IP addresses
Security Groups: Stateful, operate at the EC2 instance level or ENI
- A firewall that controls traffic to and from an ENI / an EC2 Instance
- Can have only ALLOW rules
- Rules include IP addresses and other security groups
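The stateful vs. stateless distinction is a classic exam trap. The toy sketch below (hypothetical helpers, with single ports standing in for full rules) shows why a NACL needs an explicit outbound rule for the ephemeral response port while a security group allows the response automatically.

```python
def sg_allows(inbound_rules, request_port, response_port):
    """Security group (stateful): if the inbound request is allowed,
    the response is allowed automatically, with no outbound rule check."""
    return request_port in inbound_rules

def nacl_allows(inbound_rules, outbound_rules, request_port, response_port):
    """Network ACL (stateless): request and response are evaluated
    independently against the inbound and outbound rules."""
    return request_port in inbound_rules and response_port in outbound_rules

# Hypothetical example: inbound 443 open, response on ephemeral port 1050
assert sg_allows({443}, 443, 1050) is True           # stateful: response implied
assert nacl_allows({443}, {80}, 443, 1050) is False  # stateless: response blocked
```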
Subnets: Tied to an AZ, network partition of the VPC
VPC Flow Logs: Capture information about IP traffic going into your interfaces: VPC Flow Logs, Subnet Flow Logs, Elastic Network Interface Flow Logs
VPC Flow logs data can go to S3 / CloudWatch Logs.
VPC Peering: Connects two VPCs privately using AWS' network. They must not have overlapping CIDRs (IP address ranges).
VPC Peering connections are not transitive (a connection must be established for each pair of VPCs that need to communicate)
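The non-overlapping CIDR requirement can be checked locally with Python's standard `ipaddress` module. `can_peer` is a hypothetical helper for illustration, not an AWS API.

```python
import ipaddress

def can_peer(cidr_a, cidr_b):
    """VPC peering requires non-overlapping CIDR blocks; this local
    sketch checks whether two ranges overlap."""
    a = ipaddress.ip_network(cidr_a)
    b = ipaddress.ip_network(cidr_b)
    return not a.overlaps(b)

print(can_peer("10.0.0.0/16", "10.1.0.0/16"))  # True — peering possible
print(can_peer("10.0.0.0/16", "10.0.1.0/24"))  # False — ranges overlap
```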
VPC Endpoints: Endpoints allow you to connect to AWS Services using a private network instead of the public www network
- This gives you enhanced security and lower latency to access AWS services
- VPC Endpoint Gateway: S3 & DynamoDB
- VPC Endpoint Interface: most other AWS services
Site to Site VPN: VPN over public internet between on-premises DC and AWS
- Connect an on-premises VPN to AWS
- The connection is automatically encrypted
- Goes over the public internet
- On-premises: must use a Customer Gateway (CGW) and AWS: must use a Virtual Private Gateway (VGW) for Site-to-site VPN.
Direct Connect (DX): direct private connection to AWS
- Establish a physical connection between on-premises and AWS
- The connection is private, secure and fast
- Goes over a private network
- Takes at least a month to establish
Transit Gateway: For transitive peering between thousands of VPCs and on-premises networks, in a hub-and-spoke (star) connection.
Works with Direct Connect Gateway, VPN connections.
AWS GLOBAL INFRASTRUCTURE:
Global DNS: Route 53:
- Great to route users to the closest deployment with least latency
- Great for disaster recovery strategies
- Route53 is a Managed DNS (Domain Name System)
- DNS is a collection of rules and records which helps clients understand how to reach a server through URLs.
- In AWS, the most common records are:
- www.google.com => 192.0.2.44 == A record (IPv4)
- www.google.com => 2001:0db8:85a3:0000:0000:8a2e:0370:7334 == AAAA record (IPv6)
- search.google.com => www.google.com == CNAME: hostname to hostname
- example.com => AWS resource == Alias (ex: ELB, CloudFront, S3, RDS, etc.)
- Route 53 Routing Policies:
1) Simple Routing Policy: No Health Check.
2) Weighted Routing Policy: Health Check, can route requests based on a defined %, like 70% to region A, 30% to region B.
3) Latency Routing Policy: Health Check, a user close to Mumbai will be connected to the Mumbai region rather than the Paris region.
4) Failover Routing Policy: Health Check, used for Disaster Recovery.
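For the weighted policy, the traffic split is simply proportional to each record's weight. This sketch (a hypothetical helper, not a Route 53 API) reproduces the 70/30 example above.

```python
def weighted_share(weights, total_requests):
    """Sketch of how a weighted routing policy splits traffic:
    each record receives traffic proportional to weight / sum(weights)."""
    total_weight = sum(weights.values())
    return {name: total_requests * w // total_weight for name, w in weights.items()}

print(weighted_share({"region-a": 70, "region-b": 30}, 1000))
# → {'region-a': 700, 'region-b': 300}
```

Note that Route 53 weights need not sum to 100; only the ratio matters, which is why the helper divides by the weight total.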
Global Content Delivery Network (CDN): CloudFront –
- Replicate part of your application to AWS Edge Locations – decrease latency
- Cache common requests – improved user experience and decreased latency
- Improves read performance, content is cached at the edge.
- DDoS protection (because worldwide), integration with Shield, AWS Web Application Firewall
- A) S3 Bucket:
- For distributing files and caching them at the edge
- Enhanced security with CloudFront Origin Access Identity (OAI)
- CloudFront can be used as an ingress (to upload files to S3)
- B) Custom Origin (HTTP):
- Application Load Balancer
- EC2 instance
- S3 website (must first enable the bucket as a static S3 website)
- Any HTTP backend you want
CloudFront vs S3 Cross Region Replication
S3 Transfer Acceleration:
- Accelerate global uploads & downloads into Amazon S3.
- Increase transfer speed by transferring file to an AWS edge location which will forward the data to the S3 bucket in the target region
AWS Global Accelerator:
- Improve global application availability and performance using the AWS global network
- Leverage the AWS internal network to optimize the route to your application (60% improvement)
- 2 Anycast IPs are created for your application and traffic is sent through Edge Locations
- The Edge locations send the traffic to your application
- AWS Outposts are “server racks” that offer the same AWS infrastructure, services, APIs & tools to build your own applications on-premises just as in the cloud
- AWS will set up and manage “Outposts Racks” within your on-premises infrastructure and you can start leveraging AWS services on-premises
- You are responsible for the Outposts Rack physical security.
- Low-latency access to on-premises systems
- Local data processing
- Data residency
- Easier migration from on-premises to the cloud
- Fully managed service
Some services that work on Outposts: Amazon EC2, Amazon EBS, Amazon S3, Amazon EKS, Amazon ECS, Amazon RDS, Amazon EMR.
- Wavelength Zones are infrastructure deployments embedded within the telecommunications providers’ datacenters at the edge of the 5G networks
- Brings AWS services to the edge of the 5G networks
- Use cases: Smart Cities, ML-assisted diagnostics, Connected Vehicles, Interactive Live Video Streams, AR/VR, Real-time Gaming, …
AWS Local Zones:
- Places AWS compute, storage, database, and other selected AWS services closer to end users to run latency-sensitive applications
- Extend your VPC to more locations – “Extension of an AWS Region”
Amazon CloudWatch Metrics
- CloudWatch provides metrics for every service in AWS
- Metric is a variable to monitor (CPUUtilization, NetworkIn…)
- Metrics have timestamps
- Can create CloudWatch dashboards of metrics
- CloudWatch Billing metric only available in us-east-1 for billing alerts.
- EC2 instances: CPU Utilization, Status Checks, Network (not RAM)
- Default metrics every 5 minutes
- Option for Detailed Monitoring ($$$): metrics every 1 minute
- EBS volumes: Disk Read/Writes
- S3 buckets: BucketSizeBytes, NumberOfObjects, AllRequests
- Billing: Total Estimated Charge (only in us-east-1)
- Service Limits: how much you’ve been using a service API
- Custom metrics: push your own metrics
Amazon CloudWatch Alarms
- Alarms are used to trigger notifications for any metric
- Alarm actions:
- Auto Scaling: increase or decrease EC2 instances “desired” count
- EC2 Actions: stop, terminate, reboot or recover an EC2 instance
- SNS notifications: send a notification into an SNS topic
- Example: create a billing alarm on the CloudWatch Billing metric
- Alarm States: OK, INSUFFICIENT_DATA, ALARM
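The three alarm states map onto a simple evaluation rule. This simplified sketch ignores evaluation periods and missing-data settings, and treats any datapoint above the threshold as breaching (real CloudWatch evaluation is configurable).

```python
def alarm_state(datapoints, threshold):
    """Sketch of CloudWatch alarm evaluation: no data → INSUFFICIENT_DATA,
    a datapoint breaching the threshold → ALARM, otherwise OK."""
    if not datapoints:
        return "INSUFFICIENT_DATA"
    return "ALARM" if max(datapoints) > threshold else "OK"

assert alarm_state([], 80) == "INSUFFICIENT_DATA"
assert alarm_state([42, 55], 80) == "OK"
assert alarm_state([42, 91], 80) == "ALARM"
```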
Amazon CloudWatch Logs:
- CloudWatch Logs can collect logs from:
- Elastic Beanstalk: collection of logs from application
- ECS: collection from containers
- AWS Lambda: collection from function logs
- CloudTrail: based on filters
- CloudWatch log agents: on EC2 machines or on-premises servers
- Route53: Log DNS queries
- Enables real-time monitoring of logs
- Adjustable CloudWatch Logs retention
CloudWatch Logs for EC2:
- By default, no logs from your EC2 instance will go to CloudWatch
- You need to run a CloudWatch agent on EC2 to push the log files you want
- Make sure IAM permissions are correct
- The CloudWatch log agent can be setup on-premises too
Amazon CloudWatch Events:
- Schedule: Cron jobs (scheduled scripts)
- Event Pattern: Event rules to react to a service doing something
- Trigger Lambda functions, send SQS/SNS messages
Amazon EventBridge (Formerly known as CloudWatch Events):
- EventBridge is the next evolution of CloudWatch Events
- Default event bus: generated by AWS services (CloudWatch Events)
- Partner event bus: receive events from SaaS service or applications (Zendesk, DataDog, Segment, Auth0…)
- Custom Event buses: for your own applications
- Schema Registry: model event schemas
- EventBridge has a different name to mark the new capabilities
AWS CloudTrail:
- Provides governance, compliance, and audit for your AWS Account
- CloudTrail is enabled by default!
- Get a history of events / API calls made within your AWS Account by: • Console, • SDK, • CLI, • AWS Services
- Can put logs from CloudTrail into CloudWatch Logs or S3
- A trail can be applied to All Regions (default) or a single Region.
- If a resource is deleted in AWS, investigate CloudTrail first!
- A) Management Events:
- Operations that are performed on resources in your AWS account
- Configuring security (IAM AttachRolePolicy)
- Configuring rules for routing data (Amazon EC2 CreateSubnet)
- Setting up logging (AWS CloudTrail CreateTrail)
- By default, trails are configured to log management events.
- Can separate Read Events (that don’t modify resources) from Write Events (that may modify resources)
- B) Data Events:
- By default, data events are not logged (because of high-volume operations)
- Amazon S3 object-level activity (ex: GetObject, DeleteObject, PutObject): can separate Read and Write Events
- AWS Lambda function execution activity (the Invoke API)
- C) CloudTrail Insights Events:
- Enable CloudTrail Insights to detect unusual activity in your account:
- inaccurate resource provisioning
- hitting service limits
- Bursts of AWS IAM actions
- Gaps in periodic maintenance activity
- CloudTrail Insights analyzes normal management events to create a baseline.
- And then continuously analyzes write events to detect unusual patterns.
- Anomalies appear in the CloudTrail console
- Event is sent to Amazon S3
- An EventBridge event is generated (for automation needs)
CloudTrail Events Retention:
- Events are stored for 90 days in CloudTrail
- To keep events beyond this period, log them to S3 and use Athena
AWS X-Ray: trace requests made through your distributed applications
AWS X-Ray advantages
- Troubleshooting performance (bottlenecks)
- Understand dependencies in a microservice architecture
- Pinpoint service issues
- Review request behavior
- Find errors and exceptions
- Are we meeting the time SLA?
- Where am I throttled?
- Identify users that are impacted
Amazon CodeGuru: An ML-powered service for automated code reviews and application performance recommendations
- Provides two functionalities:
- CodeGuru Reviewer: automated code reviews for static code analysis (development)
- Identify critical issues, security vulnerabilities, and hard-to-find bugs
- Example: common coding best practices, resource leaks, security detection, input validation
- Uses Machine Learning and automated reasoning
- Hard-learned lessons across millions of code reviews on 1000s of open-source and Amazon repositories
- Supports Java and Python
- Integrates with GitHub, Bitbucket, and AWS CodeCommit
- CodeGuru Profiler: visibility/recommendations about application performance during runtime (production)
- Helps understand the runtime behavior of your application
- Example: identify if your application is consuming excessive CPU capacity on a logging routine
- Identify and remove code inefficiencies
- Improve application performance (e.g., reduce CPU utilization)
- Decrease compute costs
- Provides heap summary (identify which objects are using up memory)
- Anomaly Detection
- Supports applications running on AWS or on-premises
- Minimal overhead on application
AWS Health Dashboards:
- Service Health Dashboard: status of all AWS services across all regions
- Shows all regions, all services health
- Shows historical information for each day
- Personal Health Dashboard: AWS events that impact your infrastructure
- AWS Personal Health Dashboard provides alerts and remediation guidance when AWS is experiencing events that may impact you.
- Personal Health Dashboard gives you a personalized view into the performance and availability of the AWS services underlying your AWS resources.
- The dashboard displays relevant and timely information to help you manage events in progress and provides proactive notification to help you plan for scheduled activities.
Account Management, Billing & Support Section:
AWS Organizations:
- Global service
- Allows you to manage multiple AWS accounts
- The main account is the management account; the other accounts are called member accounts.
- Cost Benefits:
- Consolidated Billing across all accounts – single payment method
- Pricing benefits from aggregated usage (volume discount for EC2, S3…)
- Pooling of Reserved EC2 instances for optimal savings
- API is available to automate AWS account creation
- Restrict account privileges using Service Control Policies (SCP)
Multi Account Strategies:
- Create accounts per department, per cost center, per dev/ test/ prod, based on regulatory restrictions (using SCP), for better resource isolation (ex: VPC), to have separate per-account service limits, isolated account for logging
Service Control Policies (SCP):
- Whitelist or blacklist IAM actions
- Applied at the OU or Account level
- Does not apply to the Master Account
- SCP is applied to all the Users and Roles of the Account, including the Root user
- SCPs do not affect service-linked roles -> service-linked roles enable other AWS services to integrate with AWS Organizations and can’t be restricted by SCPs.
- SCP must have an explicit Allow (does not allow anything by default)
- Use cases:
- Restrict access to certain services (for example: can’t use EMR)
- Enforce PCI compliance by explicitly disabling services
AWS Organization – Consolidated Billing:
- When enabled, provides you with:
- Combined Usage – combine the usage across all AWS accounts in the AWS Organization to share the volume pricing, Reserved Instances and Savings Plans discounts
- One Bill – get one bill for all AWS Accounts in the AWS Organization
- The management account can turn off Reserved Instances discount sharing for any account in the AWS Organization, including itself
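The volume-discount benefit of combined usage is easiest to see with a toy tiered price. The tiers and rates below are made up for illustration (not real AWS pricing): two accounts billed together cross into a cheaper tier that neither reaches alone.

```python
def tiered_cost(gb):
    """Illustrative tiered price: first 50,000 GB at $0.023/GB, the
    remainder at $0.021/GB (made-up tiers to show the mechanism)."""
    first = min(gb, 50_000)
    rest = max(gb - 50_000, 0)
    return first * 0.023 + rest * 0.021

separate = tiered_cost(40_000) + tiered_cost(30_000)  # each account billed alone
combined = tiered_cost(70_000)                        # consolidated billing
# combined usage crosses the volume tier, so the aggregated bill is lower
```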
AWS Control Tower:
- Easy way to set up and govern a secure and compliant multi-account AWS environment based on best practices
- Automate the setup of your environment in a few clicks
- Automate ongoing policy management using guardrails
- Detect policy violations and remediate them
- Monitor compliance through an interactive dashboard
- AWS Control Tower runs on top of AWS Organizations: it automatically sets up AWS Organizations to organize accounts and implement SCPs (Service Control Policies)
Pricing Models in AWS:
- AWS has 4 pricing models:
1) Pay as you go: pay for what you use, remain agile, responsive, meet scale demands
2) Save when you reserve: minimize risks, predictably manage budgets, comply with long-term requirements. Reservations are available for EC2 Reserved Instances, DynamoDB Reserved Capacity, ElastiCache Reserved Nodes, RDS Reserved Instances, Redshift Reserved Nodes
3) Pay less by using more: volume-based discounts
4) Pay less as AWS grows
AWS Savings Plans:
- Commit to a certain $ amount per hour for 1 or 3 years
- Easiest way to set up long-term commitments on AWS
1• EC2 Savings Plan:
- Up to 72% discount compared to On-Demand
- Commit to usage of individual instance families in a region (e.g. C5 or M5)
- Regardless of AZ, size (m5.xl to m5.4xl), OS (Linux/Windows) or tenancy
- All upfront, partial upfront, no upfront
2• Compute Savings Plan:
- Up to 66% discount compared to On-Demand
- Regardless of Family, Region, size, OS, tenancy, compute options
- Compute Options: EC2, Fargate, Lambda
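The discount arithmetic is straightforward. This sketch applies the headline percentages above to a hypothetical On-Demand hourly rate; actual discounts depend on instance family, term, and payment option.

```python
def savings_plan_rate(on_demand_hourly, discount_pct):
    """Effective hourly rate after a Savings Plan discount
    (e.g. up to 72% for EC2 Savings Plans, up to 66% for Compute)."""
    return on_demand_hourly * (1 - discount_pct / 100)

print(round(savings_plan_rate(0.10, 72), 3))  # → 0.028
print(round(savings_plan_rate(0.10, 66), 3))  # → 0.034
```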
AWS Compute Optimizer:
- Reduce costs and improve performance by recommending optimal AWS resources for your workloads
- Helps you choose optimal configurations and rightsize your workloads (over/under provisioned)
- Uses Machine Learning to analyze your resources’ configurations and their utilization CloudWatch metrics
- Supported resources – EC2 instances, EC2 Auto Scaling Groups, EBS volumes, Lambda functions
- Lower your costs by up to 25%
- Recommendations can be exported to S3
Billing and Costing Tools:
- Estimating costs in the cloud:
Pricing Calculator: Estimate the cost for your solution architecture
- Tracking costs in the cloud:
1) Billing Dashboard: high level overview + free tier dashboard
2) Cost Allocation Tags: Use cost allocation tags to track your AWS costs on a detailed level.
AWS-generated tags: • Automatically applied to the resources you create • Start with prefix aws: (e.g. aws:createdBy)
User-defined tags: • Defined by the user • Start with prefix user:
3) Cost and Usage Reports: The AWS Cost & Usage Report contains the most comprehensive set of AWS cost and usage data available, including additional metadata about AWS services, pricing, and reservations (e.g., Amazon EC2 Reserved Instances (RIs)).
- The AWS Cost & Usage Report lists AWS usage for each service category used by an account and its IAM users in hourly or daily line items, as well as any tags that you have activated for cost allocation purposes. • Can be integrated with Athena, Redshift or QuickSight
4) Cost Explorer:
- Visualize, understand, and manage your AWS costsand usage over time
- Create custom reports that analyze cost and usage data.
- Analyze your data at a high level: total costs and usage across all accounts
- Or Monthly, hourly, resource level granularity
- Choose an optimal Savings Plan (to lower prices on your bill)
- Forecast usage up to 12 months based on previous usage
- Cost Explorer – Savings Plan: alternative to Reserved Instances
- Cost Explorer – Forecast Usage
- Monitoring against costs plans:
1) Billing Alarms: • Billing data metric is stored in CloudWatch us-east-1 • Billing data is for overall worldwide AWS costs
- It’s for actual costs, not projected costs • Intended as a simple alarm
2) Budgets: • Create budget and send alarms when costs exceed the budget
- 3 types of budgets: Usage, Cost, Reservation
- For Reserved Instances (RI): Track utilization, Supports EC2, ElastiCache, RDS, Redshift
- Up to 5 SNS notifications per budget
- Can filter by: Service, Linked Account, Tag, Purchase Option, Instance Type, Region, Availability Zone, API Operation, etc…
- Same options as AWS Cost Explorer!
- 2 budgets are free, then $0.02/day/budget
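The budget pricing stated above (2 free, then $0.02/day/budget) works out as follows; `monthly_budget_fee` is a hypothetical helper for the arithmetic only.

```python
def monthly_budget_fee(num_budgets, days=30):
    """AWS Budgets pricing as stated above: first 2 budgets free,
    then $0.02 per budget per day."""
    billable = max(num_budgets - 2, 0)
    return round(billable * 0.02 * days, 2)

print(monthly_budget_fee(2))  # → 0.0
print(monthly_budget_fee(5))  # → 1.8
```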
Trusted Advisor: • No need to install anything – high level AWS account assessment
- Analyzes your AWS account and provides recommendations on 5 categories:
1) Cost optimization
2) Performance
3) Security
4) Fault tolerance
5) Service limits (Service Quotas)
AWS Support Plans:
- AWS Basic Support Plan: free
- Customer Service & Communities – 24×7 access to customer service, documentation, whitepapers, and support forums.
- AWS Trusted Advisor – Access to the 7 core Trusted Advisor checks and guidance to provision your resources following best practices to increase performance and improve security.
- AWS Personal Health Dashboard – A personalized view of the health of AWS services, and alerts when your resources are impacted.
- AWS Developer Support Plan:
All of Basic Support Plan +
- Business hours email access to Cloud Support Associates
- Unlimited cases / 1 primary contact
- Case severity / response times:
- General guidance: < 24 business hours
- System impaired: < 12 business hours
- AWS Business Support Plan (24/7):
- Intended to be used if you have production workloads
- Trusted Advisor – Full set of checks + API access
- 24×7 phone, email, and chat access to Cloud Support Engineers
- Unlimited cases / unlimited contacts
- Access to Infrastructure Event Management for an additional fee.
- Case severity / response times:
- General guidance: < 24 business hours
- System impaired: < 12 business hours
- Production system impaired: < 4 hours
- Production system down: < 1 hour
- AWS Enterprise On-Ramp Support Plan (24/7):
- Intended to be used if you have production or business critical workloads
- All of Business Support Plan +
- Access to a pool of Technical Account Managers (TAM)
- Concierge Support Team (for billing and account best practices)
- Infrastructure Event Management, Well-Architected & Operations Reviews
- Case severity / response times:
- Production system impaired: < 4 hours
- Production system down: < 1 hour
- Business-critical system down: < 30 minutes
- AWS Enterprise Support Plan (24/7):
- Intended to be used if you have mission critical workloads
- All of Business Support Plan +
- Access to a designated Technical Account Manager (TAM)
- Concierge Support Team (for billing and account best practices)
- Infrastructure Event Management, Well-Architected & Operations Reviews
- Case severity / response times:
- Production system impaired: < 4 hours
- Production system down: < 1 hour
- Business-critical system down: < 15 minutes
AWS Architecting & Ecosystem:
Well Architected Framework – General Guiding Principles:
- Stop guessing your capacity needs
- Test systems at production scale
- Automate to make architectural experimentation easier
- Allow for evolutionary architectures: Design based on changing requirements
- Drive architectures using data
- Improve through game days: Simulate applications for flash sale days
AWS Cloud Best Practices – Design Principles
- Scalability: vertical & horizontal
- Disposable Resources: servers should be disposable & easily configured
- Automation: Serverless, Infrastructure as a Service, Auto Scaling…
- Loose Coupling: • Break it down into smaller, loosely coupled components. A change or a failure in one component should not cascade to other components
- Services, not Servers: • Don’t use just EC2 • Use managed services, databases, serverless, etc
Well Architected Framework – 6 Pillars:
1) Operational Excellence
2) Security
3) Reliability
4) Performance Efficiency
5) Cost Optimization
6) Sustainability
***They are not something to balance, or trade-offs; they’re a synergy***
1) Operational Excellence
- Includes the ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures
- Perform operations as code – Infrastructure as Code, e.g. CloudFormation
- Annotate documentation – Automate the creation of annotated documentation after every build
- Make frequent, small, reversible changes – So that in case of any failure, you can roll back
- Refine operations procedures frequently – And ensure that team members are familiar with them
- Anticipate failure
- Learn from all operational failures
2) Security
- Includes the ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies
- Design Principles
- Implement a strong identity foundation – Centralize privilege management and reduce (or even eliminate) reliance on long-term credentials – Principle of least privilege – IAM
- Enable traceability – Integrate logs and metrics with systems to automatically respond and take action
- Apply security at all layers – Like edge network, VPC, subnet, load balancer, every instance, operating system, and application
- Automate security best practices
- Protect data in transit and at rest – Encryption, tokenization, and access control
- Keep people away from data – Reduce or eliminate the need for direct access or manual processing of data
- Prepare for security events – Run incident response simulations and use tools with automation to increase your speed for detection, investigation, and recovery
- Shared Responsibility Model
3) Reliability
- Includes the ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues
- Design Principles
- Test recovery procedures – Use automation to simulate different failures or to recreate scenarios that led to failures before
- Automatically recover from failure – Anticipate and remediate failures before they occur
- Scale horizontally to increase aggregate system availability – Distribute requests across multiple, smaller resources to ensure that they don’t share a common point of failure
- Stop guessing capacity – Maintain the optimal level to satisfy demand without over- or under-provisioning – Use Auto Scaling
- Manage change in automation – Use automation to make changes to infrastructure.
4) Performance Efficiency
- Includes the ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve
- Design Principles
- Democratize advanced technologies – Advanced technologies become services, and hence you can focus more on product development
- Go global in minutes – Easy deployment in multiple regions
- Use serverless architectures – Avoid the burden of managing servers
- Experiment more often – Easy to carry out comparative testing
- Mechanical sympathy – Be aware of all AWS services
5) Cost Optimization
- Includes the ability to run systems to deliver business value at the lowest price point.
- Design Principles
- Adopt a consumption model – Pay only for what you use
- Measure overall efficiency – Use CloudWatch
- Stop spending money on data center operations – AWS does the infrastructure part and enables customers to focus on organization projects.
- Analyze and attribute expenditure – Accurate identification of system usage and costs helps measure return on investment (ROI) – Make sure to use tags
- Use managed and application-level services to reduce cost of ownership – As managed services operate at cloud scale, they can offer a lower cost per transaction or service
6) Sustainability
- The sustainability pillar focuses on minimizing the environmental impacts of running cloud workloads.
- Design Principles
- Understand your impact – Establish performance indicators, evaluate improvements
- Establish sustainability goals – Set long-term goals for each workload, model return on investment
- Maximize utilization – Right-size each workload to maximize the energy efficiency of the underlying hardware and minimize idle resources.
- Anticipate and adopt new, more efficient hardware and software offerings – And design for flexibility to adopt new technologies over time.
- Use managed services – Shared services reduce the amount of infrastructure; managed services help automate sustainability best practices, such as moving infrequently accessed data to cold storage and adjusting compute capacity.
- Reduce the downstream impact of your cloud workloads – Reduce the amount of energy or resources required to use your services and reduce the need for your customers to upgrade their devices
AWS Well-Architected Tool:
- Free tool to review your architectures against the 6 pillars of the Well-Architected Framework and adopt architectural best practices
- How does it work?
- Select your workload and answer questions
- Review your answers against the 6 pillars
- Obtain advice: get videos and documentations, generate a report, see the results in a dashboard
AWS Right Sizing:
- Right sizing is the process of matching instance types and sizes to your workload performance and capacity requirements at the lowest possible cost.
- Scaling up is easy so always start small
- It’s also the process of looking at deployed instances and identifying opportunities to eliminate or downsize without compromising capacity or other requirements, which results in lower costs.
AWS Ecosystem – Free resources
- AWS Blogs:
- AWS Forums (community):
- AWS Whitepapers & Guides:
- AWS Quick Starts: Automated, gold-standard deployments in the AWS Cloud. Build your production environment quickly with templates
- Example: WordPress on AWS. Leverages CloudFormation
- AWS Solutions:
- Vetted Technology Solutions for the AWS Cloud. Example – AWS Landing Zone: a secure, multi-account AWS environment, which has been replaced by AWS Control Tower
AWS Marketplace: Digital catalog with thousands of software listings from independent software vendors (3rd party).
- If you buy through the AWS Marketplace, it goes into your AWS bill
- You can sell your own solutions on the AWS Marketplace
AWS Training:
- AWS Digital (online) and Classroom Training (in-person or virtual)
- AWS Private Training (for your organization)
- Training and Certification for the U.S. Government
- Training and Certification for the Enterprise
- AWS Academy: helps universities teach AWS
AWS Professional Services & Partner Network
- The AWS Professional Services organization is a global team of experts
- They work alongside your team and a chosen member of the APN
- APN = AWS Partner Network
- APN Technology Partners: provide hardware, connectivity, and software
- APN Consulting Partners: professional services firms that help you build on AWS
- APN Training Partners: find partners who can help you learn AWS
- AWS Competency Program: AWS Competencies are granted to APN Partners who have demonstrated technical proficiency and proven customer success in specialized solution areas
- AWS Navigate Program: helps Partners become better Partners
AWS Knowledge Center: contains answers to the most frequent and common questions and requests
AWS Route 53 – Routing Policies:
Simple routing policy – Use for a single resource that performs a given function for your domain, for example, a web server that serves content for the example.com website. You can use simple routing to create records in a private hosted zone.
Failover routing policy – Use when you want to configure active-passive failover. You can use failover routing to create records in a private hosted zone.
Geolocation routing policy – Use when you want to route traffic based on the location of your users. You can use geolocation routing to create records in a private hosted zone.
Geoproximity routing policy – Use when you want to route traffic based on the location of your resources and, optionally, shift traffic from resources in one location to resources in another.
Latency routing policy – Use when you have resources in multiple AWS Regions, and you want to route traffic to the region that provides the best latency. You can use latency routing to create records in a private hosted zone.
IP-based routing policy – Use when you want to route traffic based on the location of your users, and have the IP addresses that the traffic originates from.
Multivalue answer routing policy – Use when you want Route 53 to respond to DNS queries with up to eight healthy records selected at random. You can use multivalue answer routing to create records in a private hosted zone.
Weighted routing policy – Use to route traffic to multiple resources in proportions that you specify. You can use weighted routing to create records in a private hosted zone.
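As a rough illustration of how the weighted policy splits traffic, the sketch below picks a DNS answer in proportion to record weights; the endpoint names and the 70/30 split are hypothetical:

```python
import random
from collections import Counter

# Hypothetical weighted records: name -> weight (the 70/30 split is made up).
records = {"blue.example.com": 70, "green.example.com": 30}

def resolve(records: dict) -> str:
    """Pick one record, with probability proportional to its weight."""
    names, weights = zip(*records.items())
    return random.choices(names, weights=weights, k=1)[0]

# Simulate 10,000 DNS queries and count which endpoint each one got.
random.seed(0)
counts = Counter(resolve(records) for _ in range(10_000))
print(counts)  # roughly 7,000 blue / 3,000 green
```

Weighted routing is often used this way for blue/green or canary releases: shift a small weight to the new endpoint, watch it, then increase the proportion.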
AWS CloudTrail: Provides governance, compliance, and audit for your AWS Account. CloudTrail is enabled by default! Get a history of events / API calls made within your AWS Account. If a resource is deleted in AWS, investigate CloudTrail first!
CloudTrail Events: 1) Management Events: operations performed on resources in your AWS account (logged by default). 2) Data Events: high-volume operations performed on or within a resource (e.g., S3 object-level activity); not logged by default.
3) CloudTrail Insights Events: detect unusual activity in your account, such as A) inaccurate resource provisioning, B) hitting service limits, C) bursts of AWS IAM actions, D) gaps in periodic maintenance activity.
AWS CloudWatch: Enables real-time monitoring of AWS resources, applications, and logs. CloudWatch provides metrics for every service in AWS.
What is AWS service catalog?
AWS Service Catalog allows organizations to create and manage catalogs of IT services that are approved for use on AWS. These IT services can include everything from virtual machine images, servers, software, and databases to complete multi-tier application architectures.
What is the AWS Support Center?
AWS Support Center is the hub for managing your Support cases. The newly designed Support Center is moving to the AWS Management Console, providing both federated access support and an improved case management experience.
What is AWS License Manager?
AWS License Manager makes it easier to manage your software licenses from vendors such as Microsoft, SAP, Oracle, and IBM across AWS and on-premises environments. AWS License Manager lets administrators create customized licensing rules that mirror the terms of their licensing agreements.
AWS Systems Manager Parameter Store?
Parameter Store, a capability of AWS Systems Manager, provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, Amazon Machine Image (AMI) IDs, and license codes as parameter values. You can store values as plain text or encrypted data.
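To illustrate the hierarchical naming, here is a sketch that uses a plain dictionary as a local stand-in for Parameter Store; in a real account you would call the SSM API (e.g., via boto3's `put_parameter`, `get_parameter`, and `get_parameters_by_path`) instead of this dict, and the parameter names and values below are made up:

```python
# Local stand-in for Parameter Store, for illustration only.
# Parameters are named hierarchically, like paths.
parameters = {
    "/myapp/dev/db-url":      "dev.db.internal:5432",
    "/myapp/dev/db-password": "dev-secret",        # would be a SecureString
    "/myapp/prod/db-url":     "prod.db.internal:5432",
}

def get_parameters_by_path(path: str) -> dict:
    """Mimic fetching every parameter under a hierarchy prefix."""
    return {name: value for name, value in parameters.items()
            if name.startswith(path)}

# One call pulls all configuration for the dev environment:
print(get_parameters_by_path("/myapp/dev/"))
```

The hierarchy is the useful part: grouping parameters by application and environment lets you fetch (and IAM-restrict) a whole subtree at once.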
Agility: In the world of cloud computing, “Agility” refers to the ability to rapidly develop, test, and launch software applications that drive business growth. Another way to explain “Agility”: AWS provides a massive global cloud infrastructure that allows you to quickly innovate, experiment, and iterate. Instead of waiting weeks or months for hardware, you can instantly deploy new applications. This ability is called Agility.
Elasticity – The ability to acquire resources as you need them and release them when they are no longer needed.
Reliability – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions.
Scalability – The measure of a system’s ability to grow to accommodate increased demand, or shrink in response to diminished demand.
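The elasticity definition above can be sketched as scaling an instance count up and down with demand; the per-instance capacity of 500 requests per second is a hypothetical assumption:

```python
import math

# Hypothetical capacity of one instance (an assumption for illustration).
REQS_PER_INSTANCE = 500

def instances_needed(requests_per_second: int) -> int:
    """Acquire just enough instances for current demand (keep at least 1)."""
    return max(1, math.ceil(requests_per_second / REQS_PER_INSTANCE))

# Demand rises and falls; the fleet grows and shrinks with it.
for demand in (100, 1_200, 4_000, 300):
    print(demand, "req/s ->", instances_needed(demand), "instance(s)")
```

Releasing instances when demand drops (the `300` step) is what distinguishes elasticity from simply scaling up.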
The AWS Well-Architected Tool helps you review the state of your workloads and compares them to the latest AWS architectural best practices. It is based on the 6 pillars of the Well-Architected Framework (Operational Excellence, Security, Reliability, Performance Efficiency, Cost Optimization, and Sustainability).
AWS Trusted Advisor is an online tool that provides real-time guidance to help you provision your resources following AWS best practices (Cost Optimization, Performance, Security, Fault Tolerance, and Service Limits).
That’s all. All the best!!