Healthcare organizations are rapidly adopting container technology to drive innovation. In this session, join Horizon Blue Cross Blue Shield of New Jersey and ClearDATA to learn how to integrate Amazon ECS into your deployment pipeline while maintaining compliance for healthcare workloads, how to harden container environments for sensitive workloads, and how to leverage AWS tooling and microservices to provide new views and analysis for data stored in on-premises data centers.
AWS is an elastic, secure, flexible, and developer-centric ecosystem that serves as an ideal platform for Docker deployments. AWS offers the scalable infrastructure, APIs, and SDKs that integrate tightly into a development lifecycle and accentuate the benefits of the lightweight and portable containers that Docker offers. In this session, you learn the benefits of containers, get an overview of Amazon EC2 Container Service (Amazon ECS), and understand how to use Amazon ECS to run containerized applications at scale in production.
The Bank of Nova Scotia is using deep learning to improve the way it manages payments collections for its millions of credit card customers. In this session, we show how the Bank of Nova Scotia leveraged Amazon ECS, Amazon ECR, and Docker to streamline their deployment pipeline. We also cover how the bank used AWS IAM and Amazon S3 for asset management and security, as well as GPU-accelerated EC2 instances and TensorFlow to develop a retail risk model. We conclude the session by examining how the Bank of Nova Scotia was able to dramatically cut costs in comparison to on-premises development.
Learn how Mapbox leveled up its Amazon ECS monitoring by using Amazon CloudWatch Events and custom metrics. We cover the events that kick off data collection, which enables our team to track the trillions of compute seconds happening every day on Mapbox’s ECS clusters. The resulting custom metrics and alarms inform stakeholders across Mapbox about detailed ECS usage, so development teams and finance alike can easily put a price tag on each container.
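The pipeline described above can be sketched with a CloudWatch Events rule pattern plus a per-task accounting function. This is a minimal sketch, not Mapbox's implementation: the cluster ARN is a placeholder, and the compute-seconds formula (runtime times vCPUs) is an assumed simplification.

```python
from datetime import datetime

# Hypothetical CloudWatch Events pattern that matches ECS task state changes
# on one cluster; a rule like this can route STOPPED-task events to a Lambda
# function that records per-task compute seconds. The ARN is a placeholder.
event_pattern = {
    "source": ["aws.ecs"],
    "detail-type": ["ECS Task State Change"],
    "detail": {
        "clusterArn": ["arn:aws:ecs:us-east-1:111122223333:cluster/example"],
        "lastStatus": ["STOPPED"],
    },
}

def task_compute_seconds(detail):
    """Rough compute-seconds for one stopped task: runtime x vCPUs.
    Assumes the event detail carries ISO-8601 timestamps and a 'cpu'
    field in CPU units (1024 units = 1 vCPU)."""
    fmt = "%Y-%m-%dT%H:%M:%S%z"
    started = datetime.strptime(detail["startedAt"], fmt)
    stopped = datetime.strptime(detail["stoppedAt"], fmt)
    return (stopped - started).total_seconds() * int(detail["cpu"]) / 1024

# A half-vCPU task that ran for one hour accounts for 1800 compute seconds.
sample = {"startedAt": "2017-11-01T10:00:00+0000",
          "stoppedAt": "2017-11-01T11:00:00+0000",
          "cpu": "512"}
print(task_compute_seconds(sample))  # 1800.0
```

Summed per service or team and pushed as a custom CloudWatch metric, numbers like this are what let finance put a price tag on each container.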
Cisco’s video solutions were historically designed for on-premises dedicated hardware deployments. Typically, major releases occurred annually or bi-annually. The release process lacked the ability to absorb frequent changes and adapt to rapid market trends. This session looks into how Cisco’s IVP Solution team evolved a production system from its monolithic design into a microservices platform, leveraging cloud services, automated deployments, and delivery pipelines. Through this transition the team adopted a biweekly deployment cadence. This ultimately enabled a fast-paced migration to an AWS environment, using AWS services such as Amazon EC2, Amazon RDS, and Amazon Elasticsearch Service.
This session covers how the team at Ubisoft evolved For Honor's infrastructure using Amazon ECS and supporting systems (Amazon CloudFront, Amazon ElastiCache, Amazon Elasticsearch Service, Amazon SQS, and AWS Lambda, with monitoring through DataDog) from a proof of concept to an infrastructure as code solution. The team shares war stories about supporting both internal and live environments, and the challenges of bridging cloud and on-premises systems.
While organizations gain agility and scalability when they migrate to containers and microservices, they also benefit from compliance and security, advantages that are often overlooked. In this session, Kelvin Zhu, lead software engineer at Okta, joins Mitch Beaumont, enterprise solutions architect at AWS, to discuss security best practices for containerized infrastructure. Learn how Okta built their development workflow with an emphasis on security through testing and automation. Dive deep into how containers enable automated security and compliance checks throughout the development lifecycle. Also understand best practices for implementing AWS security and secrets management services for any containerized service architecture.
As your application’s infrastructure grows and scales, well-managed container scheduling is critical to ensuring high availability and resource optimization. In this session, we deep dive into the challenges and opportunities around container scheduling, as well as the different tools available within Amazon ECS and AWS to carry out efficient container scheduling. We discuss patterns for container scheduling available with Amazon ECS and the Blox scheduling framework.
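One of the patterns this session covers is task placement: ECS lets you combine placement strategies (spread, binpack, random) and constraints when launching tasks. Below is a hedged sketch, with illustrative attribute names, of what such a configuration looks like, plus a toy function that mimics the binpack strategy in spirit.

```python
# Hypothetical placement configuration for an ECS RunTask/CreateService call:
# spread tasks across Availability Zones first, then binpack on memory so
# partially used instances fill up before fresh ones are touched.
placement_strategy = [
    {"type": "spread", "field": "attribute:ecs.availability-zone"},
    {"type": "binpack", "field": "memory"},
]

# A constraint can keep tasks off unsuitable instances entirely; the
# "workload" attribute here is an illustrative custom attribute.
placement_constraints = [
    {"type": "memberOf", "expression": "attribute:workload == batch"},
]

def binpack_choice(instances, task_mem):
    """Toy binpack: pick the instance with the least free memory that can
    still fit the task, leaving larger instances free for bigger tasks."""
    candidates = [i for i in instances if i["free_mem"] >= task_mem]
    if not candidates:
        return None
    return min(candidates, key=lambda i: i["free_mem"])["id"]

fleet = [{"id": "i-a", "free_mem": 2048}, {"id": "i-b", "free_mem": 512}]
print(binpack_choice(fleet, 256))  # i-b
```

Binpack maximizes utilization (and can shrink your fleet), while spread maximizes availability; chaining them, as above, trades between the two.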
Scaling a microservice-based infrastructure can be challenging in terms of both technical implementation and developer workflow. In this talk, AWS Solutions Architect Pierre Steckmeyer will be joined by Will McCutchen, Architect at BuzzFeed, to discuss Amazon ECS as a platform for building a robust infrastructure for microservices. We will look at the key attributes of microservice architectures and how Amazon ECS supports these requirements in production, from configuration to sophisticated workload scheduling to networking capabilities to resource optimization. We will also examine what it takes to build an end-to-end platform on top of the wider AWS ecosystem, and what it's like to migrate a large engineering organization from a monolithic approach to microservices.
Deep dive into how Amazon ECS can enable secure, natively addressable, and highly performant network interfaces for containers using the recently launched awsvpc task networking mode. In this session, we focus on how CNI plugins were integrated with the Amazon ECS container agent and discuss the backend changes necessary to enable elastic network interface provisioning for tasks. Shakeel Sorathia, VP of engineering at FOX Digital, discusses best practices for working with Amazon ECS to enable such use cases as network isolation and IP-based routing for service discovery.
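To ground the discussion, here is a sketch of the task-definition and run-time fields involved in awsvpc mode. With `networkMode` set to `awsvpc`, each task gets its own elastic network interface and VPC IP address, so port mappings no longer need a host port. The family name, subnet, and security group IDs below are placeholders.

```python
# Minimal sketch of an awsvpc-mode task definition (placeholder names).
task_definition = {
    "family": "example-awsvpc-task",
    "networkMode": "awsvpc",
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:1.13",
            # In awsvpc mode the container port is the task's port on its
            # own ENI; no hostPort juggling across containers.
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
            "memory": 256,
        }
    ],
}

# Running an awsvpc task additionally requires a network configuration
# naming the subnets and security groups for the task's ENI.
network_configuration = {
    "awsvpcConfiguration": {
        "subnets": ["subnet-0example"],
        "securityGroups": ["sg-0example"],
    }
}

print(task_definition["networkMode"])  # awsvpc
```

Because each task has its own security groups and IP, network isolation and IP-based service discovery fall out naturally, which is what the use cases in this session build on.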
If you ask 10 teams why they migrated to containers, you will likely get answers like ‘developer productivity’, ‘cost reduction’, and ‘faster scaling’. But teams often find there are several other ‘hidden’ benefits to using containers for their services. In this talk, Franziska Schmidt, Platform Engineer at Mapbox, and Yaniv Donenfeld from AWS will discuss the obvious, and not so obvious, benefits of moving to a containerized architecture. These include using Docker and ECS to share libraries across dev teams, separating private infrastructure from shareable code, and making it easier for non-ops engineers to run services.
AWS Fargate makes running containerized workloads on AWS easier than ever before. This session provides a technical background for adopting AWS Fargate with your existing containerized services, including best practices for building images, configuring task definitions, task networking, secrets management, and monitoring. See the demo used in this presentation: https://github.com/awslabs/eb-java-scorekeep/tree/fargate
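As a sketch of the task-definition side of this, a Fargate task declares `FARGATE` compatibility, uses awsvpc networking, and sets task-level CPU and memory from a fixed menu of valid pairings. The role ARN is a placeholder, and the pairing table below is a small assumed subset of the launch-time options.

```python
# Sketch of the Fargate-specific task-definition fields (placeholder ARN).
fargate_task = {
    "family": "scorekeep",                    # name borrowed from the demo repo
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",                  # Fargate tasks require awsvpc
    "cpu": "256",                             # task CPU units (0.25 vCPU)
    "memory": "512",                          # task memory in MiB
    "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
    "containerDefinitions": [],               # container specs elided
}

# Assumed subset of valid CPU -> memory (MiB) pairings; Fargate rejects
# task definitions whose sizing falls outside the supported combinations.
VALID_SIZING = {"256": {512, 1024, 2048},
                "512": {1024, 2048, 3072, 4096}}

def valid_sizing(cpu, memory):
    """Check a cpu/memory pair against the (assumed) pairing table."""
    return int(memory) in VALID_SIZING.get(cpu, set())

print(valid_sizing("256", "512"))   # True
print(valid_sizing("256", "8192"))  # False
```

Since there are no container instances to manage, these task-level sizes are the unit you pay for and scale on.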
As containers become more embedded in the platform, tools for debugging, tracing, and logging become increasingly important. Nare Hayrapetyan, Senior Software Engineer, and Calvin French-Owen, CTO at Segment, will discuss the principles of monitoring and debugging containers and the tools Segment has implemented and built for logging, alerting, metric collection, and debugging of containerized services running on Amazon ECS.
If you've ever considered moving part of your application stack to containers, don’t miss this session. Amazon ECS Software Engineer Uttara Sridhar will cover best practices for containerizing your code, implementing automated service scaling and monitoring, and setting up automated CI/CD pipelines with fail-safe deployments. Manjeeva Silva and Thilina Gunasinghe will show how McDonald's implemented their home delivery platform in four months using Docker containers and Amazon ECS to serve tens of thousands of customers.
Image recognition is a field of deep learning that uses neural networks to recognize the subject and traits of a given image. In Japan, Cookpad uses Amazon ECS to run an image recognition platform on clusters of GPU-enabled EC2 instances. In this session, hear from Cookpad about the challenges they faced building and scaling this advanced, user-friendly service to ensure high availability and low latency for tens of millions of users.
A lot of progress has been made on how to bootstrap a cluster since Kubernetes' first commit, and it is now only a matter of minutes to go from zero to a running cluster on Amazon Web Services. However, evolving a simple Kubernetes architecture to be ready for production in a large enterprise can quickly become overwhelming with options for configuration and customization. In this session, Arun Gupta, Open Source Strategist for AWS, and Raffaele Di Fazio, software engineer at leading European fashion platform Zalando, will show common practices for running Kubernetes on AWS and share insights from experience operating tens of Kubernetes clusters in production on AWS. We will cover options and recommendations on how to install and manage clusters, configure high availability, perform rolling upgrades, and handle disaster recovery, as well as continuous integration and deployment of applications, logging, and security.
Sick of getting paged at 2am and wondering "where did all my disk space go?" New Docker users often start with a stock image in order to get up and running quickly, but this can cause problems as your application matures and scales. Creating efficient container images is important to maximize resources and deliver critical security benefits. In this session, AWS Sr. Technical Evangelist Abby Fuller will cover how to create effective images to run containers in production. This includes an in-depth discussion of how Docker image layers work, things you should think about when creating your images, working with Amazon EC2 Container Registry, and mise en place for installing dependencies. Prakash Janakiraman, Co-Founder and Chief Architect at Nextdoor, will discuss high-level and language-specific best practices for building images and how Nextdoor uses these practices to successfully scale their containerized services with a small team.
Batch processing is useful for analyzing large amounts of data. But configuring and scaling a cluster of virtual machines to process complex batch jobs can be difficult. In this talk, we show how to use containers on AWS for batch processing jobs that can scale quickly and cost-effectively. We also discuss AWS Batch, our fully managed batch-processing service. You'll also hear from GoPro and HERE about how they use AWS to run batch processing jobs at scale, including best practices for ensuring efficient scheduling, fine-grained monitoring, automatic scaling of compute resources, and security for your batch jobs.
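With AWS Batch, the unit of work is a container-based job definition that jobs (including array jobs) are submitted against. The sketch below shows the shape of such a definition plus a small helper for sizing array jobs; the image URI, job name, and `Ref::input_key` parameter are illustrative placeholders.

```python
# Hypothetical AWS Batch job definition for one containerized batch step;
# a real pipeline would register this via RegisterJobDefinition and then
# submit (array) jobs against it. All names here are placeholders.
job_definition = {
    "jobDefinitionName": "transcode-frames",
    "type": "container",
    "containerProperties": {
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/transcode:latest",
        "vcpus": 2,
        "memory": 4096,
        # Ref::input_key is a job-parameter placeholder filled in at submit time.
        "command": ["python", "transcode.py", "Ref::input_key"],
    },
    "retryStrategy": {"attempts": 3},
}

def chunk_count(total_items, items_per_job):
    """How many array-job children are needed to cover all work items
    when each child processes a fixed-size chunk (ceiling division)."""
    return -(-total_items // items_per_job)

print(chunk_count(1000, 64))  # 16
```

Batch then provisions compute to match the queue depth, which is where the automatic scaling and scheduling best practices in this session come in.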
Containers can make it easier to scale applications in the cloud, but how do you set up your CI/CD workflow to automatically test and deploy code to containerized apps? In this session, we explore how developers can build effective CI/CD workflows to manage their containerized code deployments on AWS. Ajit Zadgaonkar, director of engineering and operations at Edmunds, walks through best practices for CI/CD architectures used by his team to deploy containers. We also deep dive into topics such as how to create an accessible CI/CD platform and architect for safe Blue-Green deployments.
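The safety of blue-green deployments comes from a simple invariant: the live environment is never modified, traffic is only repointed once the new one is healthy. This toy model, with illustrative names and no real AWS API, captures that cutover-and-rollback logic.

```python
# Toy blue-green cutover: a router forwards to exactly one of two
# environments. A deploy stands up "green" alongside the live "blue",
# health-checks it, then flips the pointer. Names are illustrative.
state = {"blue": "v41", "green": "v42", "live": "blue"}

def cut_over(env_state, green_healthy):
    """Flip traffic to green only if its health checks pass; otherwise
    rollback is a no-op, because blue was never touched."""
    if green_healthy:
        env_state["live"] = "green"
    return env_state["live"]

print(cut_over(dict(state), green_healthy=True))   # green
print(cut_over(dict(state), green_healthy=False))  # blue
```

On ECS this pattern is commonly realized with two services behind separate load balancer target groups, swapping which one the listener forwards to.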
A preview of the new managed-Kubernetes service on AWS.