AGPIAL: A Good Person Is Always Learning.
81 Episodes
Designing Event-Driven Systems: Concepts and Patterns for Streaming Services with Apache Kafka, by Ben Stopford. Foreword by Sam Newman.
https://assets.confluent.io/m/7a91acf41502a75e/original/20180328-EB-Confluent_Designing_Event_Driven_Systems.pdf
While the main focus of this book is the building of event-driven systems of different sizes, there is a deeper focus on software that spans many teams.
This is the realm of service-oriented architectures: an idea that arose around the start of the century, where a company reconfigures itself around shared services that do commonly useful things.
This idea became quite popular.
Amazon famously banned all intersystem communications by anything that wasn’t a service interface.
Later, upstart Netflix went all in on microservices, and many other web-based startups followed suit.
Enterprise companies did similar things, but often using messaging systems, which have a subtly different dynamic.
Much was learned during this time, and there was significant progress made, but it wasn’t straightforward.
One lesson learned, which was pretty ubiquitous at the time, was that service-based approaches significantly increased the probability of you getting paged at 3 a.m., when one or more services go down.
In hindsight, this shouldn’t have been surprising.
If you take a set of largely independent applications and turn them into a web of highly connected ones, it doesn’t take too much effort to imagine that one important but flaky service can have far-reaching implications, and in the worst case bring the whole system to a halt.
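A back-of-the-envelope sketch makes this point concrete. Assuming (simplistically) that services fail independently, the availability of a call chain is the product of the availabilities of every service it touches; the numbers below are illustrative, not from the book:

```python
def composite_availability(per_service: float, n: int) -> float:
    """Probability a request succeeds when it must traverse n services,
    each independently available with probability per_service."""
    return per_service ** n

# One 99%-available service is tolerable; a chain of 30 of them
# succeeds only about 74% of the time.
print(composite_availability(0.99, 1))             # 0.99
print(round(composite_availability(0.99, 30), 2))  # 0.74
```

Real failures are rarely independent, which is exactly why a single flaky service can drag the whole web of services down even further than this model suggests.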
As Steve Yegge put it in his famous Amazon/Google post, “Organizing into services taught teams not to trust each other in most of the same ways they’re not supposed to trust external developers.” What did work well for Amazon, though, was the element of organizational change that came from being wholeheartedly service-based.
Service teams think of their software as being a cog in a far larger machine.
As Ian Robinson put it, “Be of the web, not behind the web.” This was a huge shift from the way people built applications previously, where intersystem communication was something teams reluctantly bolted on as an afterthought.
But the services model made
🎧 Audio/Video Book by: AGPIAL – A Good Person Is Always Learning (https://www.agpial.com/content/aviation/amtg/amtg_ch_01)
📘 Chapter Title: Chapter 1 Safety, Ground Operations, & Servicing
📚 Source: Aviation Maintenance Technician Handbook - General (30B)
✍️ Author: Source Author
---
This chapter is part of the *AGPIAL Audio/Video Book* series, based on educational and public domain reference material.
👤 This content is ideal for:
- Independent learners and lifelong students
- Anyone seeking to learn from authoritative reference material
- Learners who prefer audio/video over traditional reading
⏱️ Chapter Timestamps:
- 00:00:00 – Chapter 1 Safety, Ground Operations, & Servicing
- 00:01:03 – Shop Safety
- 00:02:10 – Electrical Safety Physiological Safety
- 00:03:33 – Fire Safety
- 00:04:40 – Safety Around Compressed Gases
- 00:06:05 – Safety Around Hazardous Materials
- 00:07:41 – Safety Around Machine Tools
- 00:10:06 – Flight Line Safety
- 00:10:07 – Hearing Protection
- 00:11:06 – Foreign Object Damage (FOD)
- 00:11:59 – Safety Around Airplanes
- 00:12:36 – Safety Around Helicopters
- 00:13:30 – Fire Safety
- 00:14:14 – Fire Protection
- 00:14:16 – Requirements for Fire to Occur
- 00:14:48 – Classification of Fires
- 00:15:54 – Types and Operation of Shop and Flight Line Fire Extinguishers
- 00:21:26 – Inspection of Fire Extinguishers
- 00:22:04 – Identifying Fire Extinguishers
- 00:23:21 – Using Fire Extinguishers
- 00:23:41 – Tie-Down Procedures
- 00:23:42 – Preparation of Aircraft
- 00:24:10 – Tie-Down Procedures for Land Planes Securing Light Aircraft
- 00:25:08 – Securing Heavy Aircraft
- 00:26:13 – Tie-Down Procedures for Seaplanes
- 00:27:02 – Tie-Down Procedures for Ski Planes
- 00:28:01 – Tie-Down Procedures for Helicopters
- 00:30:05 – Procedures for Securing Weight-Shift-Control
- 00:30:32 – Procedures for Securing Powered Parachutes
- 00:30:45 – Ground Movement of Aircraft
- 00:30:47 – Engine Starting and Operation
- 00:32:12 – Reciprocating Engines
- 00:36:50 – Hand Cranking Engines
- 00:40:17 – Extinguishing Engine Fires
- 00:41:10 – Turboprop Engines
- 00:44:52 – Turboprop Starting Procedures
- 00:46:31 – Turbofan Engines
- 00:47:52 – Starting a Turbofan Engine
- 00:50:23 – Auxiliary Power Units (APUs)
- 00:51:00 – Unsatisfactory Turbine Engine Starts
- 00:51:02 – Hot Start
- 00:51:22 – False or Hung Start
- 00:51:41 – Engine Fails to Start
- 00:52:16 – Towing of Aircraft
- 00:58:15 – Taxiing Aircraft
- 00:58:44 – Taxi Signals
- 01:01:51 – Servicing Aircraft
- 01:01:52 – Servicing Aircraft Air/Nitrogen Oil & Fluids
- 01:03:44 – Ground Support Equipment Electric Ground Power Units
- 01:06:03 – Hydraulic Ground Power Units
- 01:07:23 – Ground Support Air Units
- 01:07:45 – Ground Air Heating and Air Conditioning
- 01:08:07 – Oxygen Servicing Equipment
- 01:09:35 – Oxygen Hazards
- 01:10:28 – Fuel Servicing of Aircraft
- 01:10:29 – Types of Fuel and Identification
- 01:12:14 – Contamination Control
- 01:14:32 – Fueling Hazards
- 01:15:38 – Fueling Procedures
- 01:19:07 – Defueling
🎓 Discover more audio/video content at: https://www.agpial.com
#AGPIAL #Learning #AudioBook #VideoBook #Education #Aviation
Chapter Summary.
This chapter places emphasis on determining the airworthiness of the airplane, preflight visual inspection, managing risk and pilot-available resources, safe surface-based operations, and the adherence to and proper use of the AFM/POH and checklists.
The pilot should ensure that the airplane is in a safe condition for flight, and it meets all the regulatory requirements of 14 CFR part 91.
A pilot also needs to recognize that flight safety includes proper flight preparation and having the experience to manage the risks associated with the expected conditions.
Effective, continuous assessment and mitigation of risk, together with appropriate use of resources, go a long way, provided the pilot honestly evaluates their ability to act as PIC.
Chapter 2: Ground Operations
Introduction.
Experienced pilots place a strong emphasis on ground operations as this is where safe flight begins and ends.
They know that hasty ground operations diminish their margin of safety.
A smart pilot takes advantage of this phase of flight to assess various factors including the regulatory requirements, the pilot’s readiness for pilot-in-command (PIC) responsibilities, the airplane’s condition, the flight environment, and any external pressures that could lead to inadequate control of risk.
Flying an airplane presents many new responsibilities not required for other forms of transportation.
Focus is often placed on the flying portion itself with less emphasis placed on ground operations.
However, pilots need to allow time for flight preparation.
Situational awareness begins during preparation and only ends when the airplane is safely and securely returned to its tie-down or hangar, or if a decision is made not to go.
This chapter covers the essential elements for the regulatory basis of flight, including:
1. An airplane’s airworthiness requirements,
2. Important inspection items when conducting a preflight visual inspection,
3. Managing risk and resources, and
4. Proper and effective airplane surface movements using the AFM/POH and airplane checklists.
Preflight Assessment of the Aircraft.
The visual preflight assessment mitigates airplane flight hazards.
The preflight assessment ensures that any aircraft flown meets regulatory airworthiness standards and is in a safe mechanical condition prior to flight.
Per 14 CFR part 3, section 3.5(a), the term “airworthy” means that the aircraft conforms to its type design and is in condition for safe operation.
The owner/operator is primarily responsible for maintenance, but in accordance with 14 CFR part 91, section 91.7(a) and (b), no person may operate a civil aircraft unless it is in an airworthy condition, and the pilot in command of a civil aircraft is responsible for determining whether the aircraft is in condition for safe flight.
The pilot's inspection should involve the following:
1. Inspecting the airplane’s airworthiness status.
2. Following the AFM/POH to determine the required items for visual inspection.
Chapter Summary.
This chapter discussed some of the concepts and goals of primary and intermediate flight training.
It identified and provided an explanation of regulatory requirements and the roles of the various entities involved.
It also offered recommended techniques to be practiced and refined to develop the knowledge, proficiency, and safe habits of a competent pilot.
Chapter 1: Introduction to Flight Training
Introduction.
The overall purpose of primary and intermediate flight training, as outlined in this handbook, is the acquisition and honing of basic airmanship skills.
Airmanship is a broad term that includes a sound knowledge of and experience with the principles of flight; the knowledge, experience, and ability to operate an aircraft with competence and precision both on the ground and in the air; and the application of sound judgment that results in optimal operational safety and efficiency.
Learning to fly an aircraft has often been compared to learning to drive an automobile.
This analogy is misleading.
Since aircraft operate in a three-dimensional environment, they require a depth of knowledge and type of motor skill development that is more sensitive to this situation, such as:
Coordination–the ability to use the hands and feet together subconsciously and in the proper relationship to produce desired results in the airplane.
Timing–the application of muscular coordination at the proper instant to make flight, and all maneuvers, a constant, smooth process.
Control touch–the ability to sense the action of the airplane and knowledge to determine its probable actions immediately regarding attitude and speed variations by sensing the varying pressures and resistance of the control surfaces transmitted through the flight controls.
Speed sense–the ability to sense and react to reasonable variations of airspeed.
An accomplished pilot demonstrates the knowledge and ability to:
Assess a situation quickly and accurately and determine the correct procedure to be followed under the existing circumstance.
Predict the probable results of a given set of circumstances or of a proposed procedure.
Exercise care and due regard for safety.
Accurately gauge the performance of the aircraft.
Recognize personal limitations and limitations of the aircraft and avoid exceeding them.
Identify, assess, and mitigate risk on an ongoing basis.
Welcome to the AGPIAL audiobook production of "The next software disruption: How vendors must adapt to a new era," from McKinsey and Company's Technology, Media and Telecommunications Practice.
Please like and subscribe.
Over the turbulent past decade, many legacy software players proved to be remarkably resilient.
Now they must adopt a new strategic playbook to weather the different challenges ahead.
by Paul Roche, Jeremy Schneider, and Tejas Shah
Welcome to the AGPIAL audiobook production of the Google whitepaper "Beyond business continuity: Three IT strategies for navigating change," by Praveen Rajasekar.
Executive Summary
Business continuity done right is more than just backup and disaster recovery.
Today, business continuity means running IT services without disruption, ensuring compliance, and staying agile to respond to the unexpected.
By standardizing infrastructure and skills, strengthening reliability, and simplifying operations, you can fortify your business continuity plans, improve operational flexibility, and enable digital agility across your organization.
00:00:14 Executive Summary
00:00:44 Introduction
00:05:10 Standardize skills
00:07:20 Strengthen reliability
00:09:22 Simplify operations
00:12:40 Conclusion
Executive Summary
Running your business in the cloud is good, but can running on multiple clouds be better?
Limiting yourself to a single cloud stack can come at a significant cost.
Instead of taking advantage of the unique capabilities of every cloud, you face the limitations of proprietary systems.
Rather than uncovering more insights with best-of-breed tools, siloed data and data gravity slow down your analysis.
Where there could be resilience that comes from entirely different systems, there is concentrated risk.
These are big tradeoffs to make in exchange for the simplicity of running on just one cloud.
To diversify their cloud strategy and avoid these limitations, many organizations (81% of enterprises surveyed by Gartner¹) have turned to multicloud and hybrid deployments.
If you’re thinking of going down this path, we want to partner with you on this journey.
Here are five reasons to partner with Google Cloud on your multicloud journey.
Abstract
Cloud procurement presents an opportunity to reevaluate existing procurement strategies so you can create a flexible acquisition process that enables your public sector organization to extract the full benefits of the cloud.
Cloud procurement considerations are key components that can form the basis of a broader public sector cloud procurement strategy.
This paper presents the top 10 cloud procurement considerations for the public sector.
00:00:14 Abstract
00:00:40 Introduction
00:01:21 Cloud procurement considerations
00:01:43 Understand why cloud computing is different
00:02:45 Plan early to extract the full benefit of the cloud
00:03:30 Avoid overly prescriptive requirements
00:05:00 Separate cloud infrastructure (unmanaged services) from managed services
00:05:52 Incorporate a utility pricing model
00:07:25 Leverage third-party accreditations for security, privacy, and auditing
00:09:17 Understand that security is a shared responsibility
00:10:04 Design and implement cloud data governance
00:11:06 Specify commercial item terms
00:11:54 Define cloud evaluation criteria
00:12:37 Conclusion
Introduction
As more and more enterprises look at leveraging the capabilities of public clouds, they face an array of important decisions.
For example, they must decide which cloud(s) and what technologies they should use, how they operate and manage resources, and how they deploy applications.
A lot of technologies can help, but not all of them are equal.
If you have heavily invested time, money, and energy in creating software, shouldn’t you have the ability to deploy and manage that software seamlessly across your hybrid environments and avoid the costs of rewriting it?
How can you scale your software to meet customer demand?
Do you want to deploy your software where it makes sense, whether on-premises or to a specific public cloud, based on business value?
This white paper discusses how Kubernetes is the answer to your hybrid cloud strategy and how it provides a holistic solution that simplifies your deployment, management, and operational concerns.
The paper also provides links to additional resources that can help you refine your hybrid cloud strategy.
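Declarative manifests are what make that portability concrete: a workload described once can be applied unchanged to any conformant cluster, whether on-premises or in a public cloud. A minimal Deployment manifest might look like this (the name, labels, and image below are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend            # hypothetical workload name
spec:
  replicas: 3                   # scale out by changing a single field
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
      - name: web
        image: registry.example.com/web-frontend:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

The same file deploys with `kubectl apply -f` against any cluster, which is the portability argument this whitepaper makes.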
00:00:19 Introduction
00:01:22 Kubernetes: What is it?
00:01:54 Problems of the past
00:02:51 What a hybrid strategy offers
00:04:12 Kubernetes and Google
00:06:19 Bringing it all together
00:07:37 Conclusion
Unpicking Vendor Lock-in.
A guide to understanding and mitigating switching costs when changing your Cloud Services Provider.
Introduction
Customers should be able to switch their Cloud Services Provider (CSP) if they wish.
A CSP, or vendor, should earn customer business by providing the best services and capabilities at the best price.
If a CSP makes it difficult to switch away from them (the essential element of vendor lock-in), it suggests that their services are not earning customer trust through the value they bring, and that they are restricting customer choice.
At Amazon Web Services (AWS), we provide customers with full control, ownership, and portability of their data, and allow customers to quickly move to another CSP should they choose to.
We never want to trap customers with lock-in tactics such as fixed-price, mandatory long-term contracts, or technical hurdles to changing CSP that amount to vendor lock-in.
We want customers to stay with us because we offer the broadest choice of the best cloud services.
Our outlook is that our customers are loyal to us right up until the moment that somebody else offers them a better service.
This drives our customer-obsessed approach to innovation, and ensures we earn customer trust on a continuous basis.
This whitepaper looks at what customers should require from CSPs so they have the freedom to choose the innovative services they need, coupled with the ability to turn things off and move — should they decide to do so.
It provides a practical approach to defining, understanding, and eliminating sources of vendor lock-in.
The commentary in this paper is based on AWS’s many years of experience in delivering a secure cloud infrastructure to millions of customers worldwide.
Abstract
This whitepaper provides guidance and options for running Docker on AWS. Docker is an open platform for developing, shipping, and running applications in a loosely isolated environment called a container. Amazon Web Services (AWS) is a natural complement to containers and offers a wide range of scalable infrastructure services upon which containers can be deployed. You will find various options such as AWS Elastic Beanstalk, Amazon Elastic Container Service (Amazon ECS), Amazon Elastic Kubernetes Service (Amazon EKS), AWS Fargate, and AWS App Runner. This paper covers the details of each option and the key components of container orchestration.
Abstract
Data engineers, data analysts, and big data developers are looking to evolve their analytics from batch to real-time so their companies can learn about what their customers, applications, and products are doing right now and react promptly.
This whitepaper discusses the evolution of analytics from batch to real-time.
It describes how services such as Amazon Kinesis Streams, Amazon Kinesis Firehose, and Amazon Kinesis Analytics can be used to implement real-time applications, and provides common design patterns using these services.
Introduction
Businesses today receive data at massive scale and speed due to the explosive growth of data sources that continuously generate streams of data.
Whether it is log data from application servers, clickstream data from websites and mobile apps, or telemetry data from Internet of Things (IoT) devices, it all contains information that can help you learn about what your customers, applications, and products are doing right now.
Having the ability to process and analyze this data in real-time is essential to do things such as continuously monitor your applications to ensure high service uptime and personalize promotional offers and product recommendations.
Real-time processing can also make other common use cases, such as website analytics and machine learning, more accurate and actionable by making data available to these applications in seconds or minutes instead of hours or days.
Real-time Application Scenarios
There are two types of use case scenarios for streaming data applications:
Evolving from Batch to Streaming Analytics
You can perform real-time analytics on data that has been traditionally analyzed using batch processing in data warehouses or using Hadoop frameworks.
The most common use cases in this category include data lakes, data science, and machine learning.
You can use streaming data solutions to continuously load real-time data into your data lakes.
You can also update machine learning models more frequently as new data becomes available, ensuring accuracy and reliability of the outputs.
For example, Zillow uses Amazon Kinesis Streams to collect public record data and MLS listings, and then provide home buyers and sellers with the most up-to-date home value estimates in near real-time.
Zillow also sends the same data to its Amazon Simple Storage Service (S3) data lake using Kinesis Streams so that all the applications work with the most recent information.
Building Real-Time Applications
You can use streaming data services for real-time applications such as application monitoring, fraud detection, and live leaderboards.
These use cases require millisecond end-to-end latencies—from ingestion, to processing, all the way to emitting the results to target data stores and other systems.
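As a rough illustration of the second category, the aggregation behind a live leaderboard can be sketched in a few lines. The event shape here is invented, and a real deployment would consume these events continuously from a stream such as Kinesis rather than from a Python list:

```python
import heapq
from collections import defaultdict

def top_n(events, n=3):
    """Aggregate per-player totals from a stream of (player, points)
    events and return the n highest scorers."""
    totals = defaultdict(int)
    for player, points in events:
        totals[player] += points
    return heapq.nlargest(n, totals.items(), key=lambda kv: kv[1])

# Hypothetical scoring events arriving from a stream.
events = [("ann", 10), ("bob", 5), ("ann", 7), ("cat", 12), ("bob", 1)]
print(top_n(events))  # [('ann', 17), ('cat', 12), ('bob', 6)]
```

In a streaming system the same aggregation runs incrementally over a time window, so the leaderboard stays current within the latency budget described above.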
Welcome to the AGPIAL audiobook production of
McKinsey Digital's.
How enterprise architects need to evolve to survive in a digital world.
Enterprise architects still have an important role to play at large incumbents, but they need to evolve in three ways.
by Oliver Bossert and Niels van der Wildt
If you like this type of content please like and subscribe.
Thank you.
Many CIOs at large incumbents have made a startling discovery about digital natives: those businesses often don’t have architects or at least anyone with the formal title of “enterprise architect.”
With CIOs increasingly moving their organizations to an agile DevOps operating model, that discovery has prompted much questioning about whether they still need architects, and if so, what they should be doing.
While incumbents can learn plenty from digital natives and adopt many of their practices, eliminating the architect role shouldn’t be one of them.
That’s because digital natives have the benefits of a highly skilled and experienced workforce operating in a start-up culture on a modern architecture with few legacy issues.
Development teams in most incumbent organizations, however, don’t enjoy those benefits.
They are used to workarounds such as creating direct point-to-point connections because, for decades, that’s been the only way to get things done.
The reality is that most organizations still need architects.
About this Guide
For many customers, migrating to Amazon EMR raises many questions about assessment, planning, architectural choices, and how to meet the many requirements of moving analytics applications like Apache Spark and Apache Hadoop from on-premises data centers to a new AWS Cloud environment.
Many customers have concerns about the viability of distribution vendors or a purely open-source software approach, and they need practical advice about making a change.
This guide includes the overall steps of migration and provides best practices that we have accumulated to help customers with their migration journey.
Overview
Businesses worldwide are discovering the power of new big data processing and analytics frameworks like Apache Hadoop and Apache Spark, but they are also discovering some of the challenges of operating these technologies in on-premises data lake environments.
Not least, many customers need a safe long-term choice of platform as the big data industry is rapidly changing and some vendors are now struggling.
Common problems include a lack of agility, excessive costs, and administrative headaches, as IT organizations wrestle with the effort of provisioning resources, handling uneven workloads at large scale, and keeping up with the pace of rapidly changing, community-driven, open-source software innovation.
Many big data initiatives suffer from the delay and burden of evaluating, selecting, purchasing, receiving, deploying, integrating, provisioning, patching, maintaining, upgrading, and supporting the underlying hardware and software infrastructure.
A subtler, if equally critical, problem is the way companies’ data center deployments of Apache Hadoop and Apache Spark directly tie together the compute and storage resources in the same servers, creating an inflexible model where they must scale in lock step.
This means that almost any on-premises environment pays for high amounts of under-used disk capacity, processing power, or system memory, as each workload has different requirements for these components.
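A toy calculation (with made-up numbers) shows why lock-step scaling is wasteful: the node count is dictated by whichever resource is scarcer, and the other resource rides along over-provisioned:

```python
import math

def nodes_needed(cpu_cores: int, storage_tb: int,
                 cores_per_node: int, tb_per_node: int) -> int:
    """With compute and storage coupled in the same servers, you must
    buy enough nodes to satisfy the larger of the two requirements."""
    return max(math.ceil(cpu_cores / cores_per_node),
               math.ceil(storage_tb / tb_per_node))

# A storage-heavy workload: 100 cores of compute demand but 400 TB of
# data, on nodes with 16 cores and 8 TB each (illustrative hardware).
n = nodes_needed(100, 400, 16, 8)
print(n)             # 50 nodes, dictated entirely by storage
print(n * 16 - 100)  # 700 cores paid for but idle
```

Decoupling storage from compute (for example, keeping data in Amazon S3) lets each dimension be sized to its own demand.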
How can smart businesses find success with their big data initiatives?
Migrating big data (and machine learning) to the cloud offers many advantages.
Cloud infrastructure service providers, such as Amazon Web Services (AWS), offer a broad choice of on-demand and elastic compute resources, resilient and inexpensive persistent storage, and managed services that provide up-to-date, familiar environments to develop and operate big data applications.
Data engineers, developers, data scientists, and IT personnel can focus their efforts on preparing data and extracting valuable insights.
Services like Amazon EMR, AWS Glue, and Amazon S3 enable you to decouple and scale your compute and storage independently, while providing an integrated, well-managed, highly resilient environment, immediately reducing many of the problems of on-premises approaches.
This approach leads to faster, more agile, easier to use, and more cost-efficient big data and data lake initiatives.
Building a Scalable and Secure Multi-VPC AWS Network Infrastructure (AWS Whitepaper)
Abstract
AWS customers often rely on hundreds of accounts and VPCs to segment their workloads and expand their footprint. This level of scale often creates challenges around resource sharing, inter-VPC connectivity, and on-premises to VPC connectivity. This whitepaper describes best practices for creating scalable and secure network architectures in a large network using AWS services like Amazon VPC, AWS Transit Gateway, AWS PrivateLink, and AWS Direct Connect Gateway. It demonstrates solutions for managing growing infrastructure — ensuring scalability, high availability, and security while keeping overhead costs low.
AWS Security Incident Response Guide
This guide presents an overview of the fundamentals of responding to security incidents within a customer’s AWS Cloud environment.
It focuses on an overview of cloud security and incident response concepts, and identifies cloud capabilities, services, and mechanisms that are available to customers who are responding to security issues.
This paper is intended for those in technical roles and assumes that you are familiar with the general principles of information security, have a basic understanding of incident response in your current on-premises environments, and have some familiarity with cloud services.
Introduction
Security is the highest priority at AWS.
As an AWS customer, you benefit from a data center and network architecture that is built to meet the requirements of the most security-sensitive organizations.
The AWS Cloud has a shared responsibility model.
AWS manages security of the cloud.
You are responsible for security in the cloud.
This means that you retain control of the security you choose to implement.
You have access to hundreds of tools and services to help you meet your security objectives.
These capabilities help you establish a security baseline that meets your objectives for your applications running in the cloud.
When a deviation from your baseline does occur (such as a misconfiguration), you may need to respond and investigate.
To successfully do so, you must understand the basic concepts of security incident response within your AWS environment, as well as the issues you need to consider to prepare, educate, and train your cloud teams before security issues occur.
It is important to know which controls and capabilities you can use, to review topical examples for resolving potential concerns, and to identify remediation methods that you can use to leverage automation and improve your response speed.
Because security incident response can be a complex topic, we encourage you to start small, develop runbooks, leverage basic capabilities, and create an initial library of incident response mechanisms to iterate from and improve upon.
This initial work should include your legal department as well as teams that are not involved with security, so that you are better able to understand the impact that incident response (IR), and the choices you have made, have on your corporate goals.
Before You Begin
In addition to this document, we encourage you to review the Best Practices for Security, Identity, & Compliance and the Security Perspective of the AWS Cloud Adoption Framework (CAF) whitepaper.
The AWS CAF provides guidance that supports coordinating between the different parts of organizations that are moving to the cloud.
The CAF guidance is divided into several areas of focus that are relevant to implementing cloud-based IT systems, which we refer to as perspectives.
The Security Perspective describes how to implement a security program across several workstreams, one of which focuses on incident response.
This document details some of our experiences in helping customers to assess and implement successful mechanisms in that workstream.
AWS CAF Security Perspective
The Security Perspective includes four components:
Directive controls establish the governance, risk, and compliance models within which the environment operates.
Preventive controls protect your workloads and mitigate threats and vulnerabilities.
Detective controls provide full visibility and transparency over the operation of your deployments in AWS.
Responsive controls drive remediation of potential deviations from your security baselines.
AWS Key Management Service Best Practices
Abstract
AWS Key Management Service (AWS KMS) is a managed service that allows you to concentrate on the cryptographic needs of your applications while Amazon Web Services (AWS) manages availability, physical security, logical access control, and maintenance of the underlying infrastructure.
Further, AWS KMS allows you to audit usage of your keys by providing logs of all API calls made on them to help you meet compliance and regulatory requirements.
Customers want to know how to effectively implement AWS KMS in their environment.
This whitepaper discusses how to use AWS KMS for each capability described in the AWS Cloud Adoption Framework (CAF) Security Perspective whitepaper, including the differences between the different types of customer master keys, using AWS KMS key policies to ensure least privilege, auditing the use of the keys, and listing some use cases that work to protect sensitive information within AWS.
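As one hedged example of the least-privilege key policies the paper discusses, a key policy can separate the principals who administer a key from those who use it for cryptographic operations. The account ID and role names below are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowKeyAdministration",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/KeyAdminRole" },
      "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*", "kms:Put*",
                 "kms:Disable*", "kms:ScheduleKeyDeletion", "kms:CancelKeyDeletion"],
      "Resource": "*"
    },
    {
      "Sid": "AllowKeyUse",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/AppRole" },
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
      "Resource": "*"
    }
  ]
}
```

With this split, administrators can manage the key's lifecycle but cannot use it to decrypt data, while the application role can perform cryptographic operations but cannot reconfigure the key.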
AWS Security at Scale: Logging in AWS
How AWS CloudTrail can help you achieve compliance by logging API calls and changes to resources.
Abstract
The logging and monitoring of API calls are key components in security and operational best practices, as well as requirements for industry and regulatory compliance. AWS CloudTrail is a web service that records API calls to supported AWS services in your AWS account and delivers a log file to your Amazon Simple Storage Service (Amazon S3) bucket. AWS CloudTrail alleviates common challenges experienced in an on-premises environment, and in addition to making it easier for you to demonstrate compliance with policies or regulatory standards, the service makes it easier for you to enhance your security and operational processes. This paper provides an overview of common compliance requirements related to logging and details how AWS CloudTrail features can help satisfy these requirements. There is no additional charge for AWS CloudTrail, aside from standard charges for S3 log storage and SNS usage for optional notifications.
Abstract
This whitepaper is intended for existing and potential customers who are designing the security infrastructure and configuration for applications running in Amazon Web Services (AWS). It provides security best practices that will help you define your Information Security Management System (ISMS) and build a set of security policies and processes for your organization so you can protect your data and assets in the AWS Cloud. The whitepaper also provides an overview of different security topics such as identifying, categorizing and protecting your assets on AWS, managing access to AWS resources using accounts, users and groups and suggesting ways you can secure your data, your operating systems and applications and overall infrastructure in the cloud. The paper is targeted at IT decision makers and security personnel and assumes that you are familiar with basic security concepts in the area of networking, operating systems, data encryption, and operational controls.
Overview
Information security is of paramount importance to Amazon Web Services (AWS) customers. Security is a core functional requirement that protects mission- critical information from accidental or deliberate theft, leakage, integrity compromise, and deletion. Under the AWS shared responsibility model, AWS provides a global secure infrastructure and foundation compute, storage, networking and database services, as well as higher level services. AWS provides a range of security services and features that AWS customers can use to secure their assets. AWS customers are responsible for protecting the confidentiality, integrity, and availability of their data in the cloud, and for meeting specific business requirements for information protection.
Abstract
The focus of this paper is the security pillar of the AWS Well-Architected Framework.
It provides guidance to help you apply best practices and current recommendations in the design, delivery, and maintenance of secure AWS workloads.
Introduction
The AWS Well-Architected Framework helps you understand trade-offs for decisions you make while building workloads on AWS.
By using the Framework, you will learn current architectural best practices for designing and operating reliable, secure, efficient, and cost-effective workloads in the cloud.
It provides a way for you to consistently measure your workload against best practices and identify areas for improvement.
We believe that having well-architected workloads greatly increases the likelihood of business success.
The framework is based on five pillars:
Operational Excellence
Security
Reliability
Performance Efficiency
Cost Optimization
This paper focuses on the security pillar.
This will help you meet your business and regulatory requirements by following current AWS recommendations.
It’s intended for those in technology roles, such as chief technology officers (CTOs), chief information security officers (CSOs/CISOs), architects, developers, and operations team members.
After reading this paper, you will understand current AWS recommendations and strategies for designing cloud architectures with security in mind.
This paper doesn’t provide implementation details or architectural patterns but does include references to appropriate resources for this information.
By adopting the practices in this paper, you can build architectures that protect your data and systems, control access, and respond automatically to security events.























