Google Cloud Platform Podcast

Author: Google Cloud Platform
The Google Cloud Platform Podcast, coming to you every week, discussing everything on Google Cloud Platform from App Engine to BigQuery.
263 Episodes
Max Saltonstall and Carter Morgan co-host the podcast this week and talk APIs with our guests, Dave Feuer and Benjamin Schuler. Apigee, an API management platform that is part of Google Cloud, supports every step of the digital product life cycle to make API management easy for clients. The software company SAP provides data storage and other business support for many types of companies across the world. Together, Apigee and SAP allow data to be collected, stored, organized, and securely accessed and shared with other applications. The shift to e-commerce and the desire for tailored experiences have driven the need for more API usage and therefore better API management. SAP and Apigee, with their myriad features, allow businesses to keep up with these increasing demands efficiently. We hear examples of how companies are leveraging these tools and use cases where the power of SAP and Apigee benefits customers most. Our guests describe the developer experience as well. We talk about the process of creating a project with both SAP and Apigee and why the two tools working together make the developer’s job easier. Planning your project with an “API first” mindset means choosing APIs and SAP software early in the planning process to better align your project with your business goals. Apigee can help you manage these APIs securely, letting you choose the data that is shared. Using SAP and Apigee together helps companies realize long-term efficiency and streamlined operations as development becomes easier with each additional API.

Benjamin Schuler
Benjamin Schuler is a Solution Manager for SAP at Google Cloud with a focus on application modernization. Prior to joining Google, he worked for SAP’s consulting unit, helping companies move parts of their SAP landscape to the cloud. When he is not busy populating spreadsheets or adding yet another //TODO: to his demo apps, he likes to get out onto the water for some freeride kitesurfing.
Dave Feuer
Dave Feuer is a Senior Product Manager at Apigee, part of Google Cloud Platform. Previously, Dave ran the Platforms & Strategies practice at a boutique consulting firm, designing and implementing developer programs for Fortune 100 companies. Before that, he ran enterprise telecommunications product development and software engineering at IDT and Net2Phone, a telecommunications and payments company. Dave started his career as an embedded software development engineer, and frequently questions how he ended up spending so much time in Google Slides.

Cool things of the week
- AI Simplified: Managing ML data sets with Vertex AI blog
- Create your own journaling app without writing code blog
- AppSheet Journal site

Interview
- Apigee site
- Apigee Setup site
- SAP site
- Apigee: Your gateway to more manageable APIs for SAP blog
- Accelerate the time to value of your SAP data with Apigee video
- GCP Podcast Episode 54: API Lifecycle with Alan Ho podcast
- GCP Podcast Episode 219: Spotify with Josh Brown podcast
- Conrad Electronic: Powering next-gen retail with BigQuery and Apigee API management site
- Schlumberger chooses GCP to deliver new oil and gas technology platform blog
- Schlumberger Selects Google Cloud for its Enterprise-Wide SAP Migration and Modernization site

What’s something cool you’re working on?
Max is documenting how Google & Alphabet made the move to SAP. He’s also working on a Discord bot on Google Cloud and an ITRP series launch. Carter is working on an SAP content video series and teaching in the Equity Through Technology program.
Stephanie Wong and co-host Gabi Ferrara talk about the exciting launch of Database Migration Service at Google. Our guests this week, Shachar Guz and Gabe Weiss, start the show explaining DMS, focusing on the ease of infrastructure management for cloud users. Migration is made simpler with DMS, and Shachar and Gabe walk us through the process of using this powerful new service. Our guests outline some hurdles to migration and how DMS and the DMS documentation help developers overcome them. Shachar tells us the steps companies should take before and after running DMS to ensure projects run correctly and business logic is preserved, and Gabe stresses the importance of testing. Database Migration Service focuses on open source, and we talk about why this is an important benefit. In addition, the thorough explanations embedded in DMS help users navigate easily, serverless technology means projects are fast and efficient, and native applications are leveraged for better transparency. And it’s free.

Shachar Guz
Shachar is a product manager at Google Cloud working on the Cloud Database Migration Service. He has worked in various product and engineering roles and has a true passion for data and for helping customers get the most out of it. Shachar is passionate about building products that make cumbersome processes simple and straightforward and about helping companies adopt cloud technologies to accelerate their business.

Gabe Weiss
Gabe works on the Google Cloud Platform team ensuring that developers can make awesome things, both inside and outside of Google. Prior to Google he worked in virtual reality production and distribution, source control, the games industry, and professional acting.
Cool things of the week
- Unlock the power of change data capture and replication with new, serverless Datastream blog
- Introducing Dataplex—an intelligent data fabric for analytics at scale blog
- Data Cloud Summit site
- Google Cloud’s New 2021 Analytics Launches video
- Bringing multi-cloud analytics to your data with BigQuery Omni blog
- Applied ML Summit site

Interview
- Database Migration Service site
- DMS Documentation docs
- Cloud SQL site
- Network Intelligence Center site
- Introducing Database Migration Service video
- Best practices for homogeneous database migrations blog
- Database Migration Service Connectivity—A technical introspective blog
- Migrating MySQL data to Cloud SQL using Database Migration Service Qwiklab site

What’s something cool you’re working on?
Gabi is going to CrimeCon for fun!
On the podcast this week, we’re diving into what full-stack development looks like on Google Cloud. Guests Tony Pujals and Kevin Moore join your hosts Stephanie Wong and Grant Timmerman to help us understand how developers can leverage Dart and Google Cloud to create powerful and effective front-end and back-end systems for their projects. Kevin takes us through the evolution of Dart and Flutter and how they have become a way to offer developers an experience-first solution: developers can focus on the experience they want to create, then decide which platforms to run on. With Dart, Google provides business logic that allows developers to build the front-end and back-end experience for users in one programming language. Our guests talk about the types of projects that will benefit most from the use of Dart and how Dart is expanding to offer more features and better usability. Flutter offers a high-fidelity, rich framework that supports mobile and can be deployed on any platform. When paired with Dart on Docker Hub, developers can easily build optimized front-end and back-end systems. Tony and Kevin tell us about the new Functions Framework for Dart and how it helps developers deploy to serverless technologies. We hear more about how Dart, Flutter, and Cloud Run working together can make any project easy to build, deploy, and use.

Tony Pujals
Tony is a career engineer who’s now on the serverless developer relations team, focused on helping full-stack developers succeed in building their app backends.

Kevin Moore
Kevin is the Product Manager of Dart & Flutter at Google.

Cool things of the week
- What is Vertex AI? Developer advocates share more blog
- Google Cloud launches from Google I/O 2021 blog
- Secure and reliable development with Go | Q&A video
- Google CloudEvents - Go site

Interview
- Flutter site
- Dart site
- Go site
- Datastore site
- Dart on Docker site
- Functions Framework for Dart on GitHub site
- Cloud Run site
- Dart Documentation docs
- Google APIs with Dart docs
- App Engine site
- Dart Functions Qwiklab site
- Flutter Startup Namer Qwiklab site
- Cloud, Dart, and full-stack Flutter | Q&A video
- Go full-stack with Kotlin or Dart on Google Cloud | Session video

What’s something cool you’re working on?
Grant has been working on libraries for CloudEvents.
Stephanie Wong and Priyanka Vergadia host the podcast this week as we talk responsible AI with guests Craig Wiley and Tracy Frey. Vertex AI, the newly released AI platform from Google, is where Craig starts, telling us that it helps seamlessly integrate AI best practices into AI projects. When designing and building machine learning projects, it’s important to plan and integrate functions that support a responsible model as well. Tracy and Craig help us understand the process of designing and building these responsible, efficient projects, from problem identification and data set collection and refinement to ethical model considerations and finally project construction. Part of responsible AI is considering all the stakeholders of a project and how they will be impacted. Through examples, Tracy demonstrates how businesses can decide whether a software solution affects stakeholders in a way the business would be proud of. Starting in the planning stages and continuing through data collection and model training, companies employing responsible AI techniques will consider input from groups that may use or be affected by the model, from social scientists who specialize in human behavior, and from others. Craig elaborates on these principles in the context of Vertex AI and how the time savings of Vertex could be used to make thoughtful, responsible AI decisions. Craig teaches us more about Vertex as we wrap up the interview: its ability to analyze data and perform ongoing model monitoring makes for richer, more accurate projects. Tracy talks about the future of responsible AI and how the marriage of tech and humanity will continue to produce ethical, effective AI projects.

Craig Wiley
Craig is the Director of Product for Google Cloud’s AI Platform. Prior to Google, Craig spent nine years at Amazon as the General Manager of Amazon SageMaker, AWS’s machine learning platform, as well as in Amazon’s 3rd Party Seller Business. Craig has a deep belief in democratizing the power of data, and he pushes to improve the tooling for experienced users while seeking to simplify it for the growing set of less experienced users. Outside of work he enjoys spending time with his family, eating delicious meals, and enthusiastically struggling through small home improvement projects.

Tracy Frey
Tracy Frey is Google Cloud AI & Industry Solutions’ Managing Director of Outbound Product Management, Incubation and Responsible AI and is dedicated to ensuring Google Cloud AI & Industry Solutions is responsible, thoughtful, and collaborative as it continues to advance artificial intelligence and machine learning. She has been at Google for more than 10 years, working on many different products and areas. Before joining Google she worked at multiple early-stage tech startups, where she held roles spanning product management, developer relations, product marketing, business development, and strategy. Prior to her life in tech she taught children traditional wilderness survival skills, taught in a traditional classroom, studied private reserves in Costa Rica, and was a professional hip hop dancer.

Cool things of the week
- Cloud CISO Perspectives: May 2021 blog
- The cloud developer’s guide to Google I/O 2021 blog

Interview
- Vertex AI site
- Responsible AI site
- Staying ahead of the curve – The business case for responsible AI article
- Building responsible AI for everyone site
- Cloud Storage site
- BigQuery site
- Data Cloud Summit site
- Applied ML Summit site
- GCP Podcast Episode 249: ML Lifecycle with Dale Markowitz and Craig Wiley podcast
- AI Edition Google’s Tracy Frey: Creating Responsible AI podcast
- TensorFlow Responsible AI Toolkit site

What’s something cool you’re working on?
Priyanka has been working on the Vertex AI video series. Episode 1 and episode 2 are available now!
This week on the show, our guests Anu Srivastava and Sudheera Vanguri talk about Document AI with hosts Stephanie Wong and Dale Markowitz. Document AI uses artificial intelligence to improve the way businesses create and manage documents like paystubs, tax forms, contracts, and virtually any other business document. Data normally stored on paper can be parsed, enriched, and structured, then stored securely with Document AI, making it more accessible and more manageable. Our guests go on to describe the process of using this powerful tool and instances where developers and enterprise companies could benefit. We talk about Lending DocAI and Procurement DocAI and how offerings like Google Vision and Knowledge Graph enhance these powerful tools. Users of Document AI can take advantage of these tools as well as bring their own expertise to create custom models. Later, we learn about the developer experience on the Document AI Platform. Our guests talk specifically about the use of Knowledge Graph and how its advanced search capabilities allow Document AI users to collect data from myriad sources, filling in missing information and enriching search results with other useful data. To demonstrate the use of the platform and integrated Google AI tools, we hear about the real-world examples of Workday and Mr. Cooper and their document processing and model training.

Sudheera Vanguri
Sudheera Vanguri is the head of Product Management at Google Cloud Document AI.

Anu Srivastava
Anu Srivastava is an Applied AI Engineer for ML on Google Cloud. Before that, she was a software engineer in Android and Google Cloud Infrastructure.
Cool things of the week
- A handy new Google Cloud, AWS, and Azure product map blog
- Compare AWS and Azure services to Google Cloud docs
- Google Cloud and Seagate: Transforming hard-disk drive maintenance with predictive ML blog

Interview
- Document AI site
- BigQuery site
- Lending DocAI site
- Procurement DocAI site
- Cloud Natural Language site
- Google Vision AI site
- Google Knowledge Graph site
- Cloud Translation site
- Workday site
- Mr. Cooper site
- AODocs site
- Processors overview site
- Python Codelab site
- Getting started with the Document AI platform video

What’s something cool you’re working on?
We’ve been working hard on Google I/O.
On the show this week, Mark Mirchandani joins Stephanie Wong to talk about serverless computing and the Cloud OnAir serverless event with our guests. Aparna Sinha and Philip Beevers start the show by giving us a thorough definition of serverless infrastructure and how this setup can help clients run efficient and cost-effective projects with easy scalability and observability. Serverless has grown exponentially over the last decade, and Aparna talks about how that trajectory will continue in the future. At its core, the serverless structure allows large enterprise companies to do what they need to do, from analyzing real-time information to ensuring dinner is delivered piping hot. Aparna describes the three aspects of next-generation serverless (developer centricity, versatility, and built-in best practices) and how Google is using them to empower developers and company employees to create robust projects efficiently and economically. Phil tells us about the experience of using serverless products and the success of the three pillars in Google’s serverless offerings. Enterprise customers like MediaMarktSaturn and Ikea are taking advantage of the serverless system for e-commerce, data processing, machine learning, and more. Our guests describe client experiences and how customer feedback is used to improve Google’s serverless tools. With so many serverless tools available, our guests offer advice on choosing the right products for your project. We also hear all about the upcoming Cloud OnAir event and what participants can expect, from product announcements and live demos to thorough reviews of recently added serverless features.

Aparna Sinha
Aparna Sinha is Director of Product at Google Cloud and the product leader for Serverless Application Development and DevOps. She is passionate about transforming businesses through faster, safer software delivery. Previously, Aparna helped grow Kubernetes into a widely adopted platform across industries. Aparna holds a PhD in Electrical Engineering from Stanford. She is Chair of the Governing Board of the Cloud Native Computing Foundation (CNCF). She lives in Palo Alto with her husband and two kids.

Philip Beevers
Phil has been at Google for seven years. He currently leads the Serverless Engineering teams and previously ran the Site Reliability Engineering team for Google Cloud and Google’s internal technical infrastructure. Phil holds a BA in Mathematics from Oxford University.

Cool things of the week
- The evolution of Kubernetes networking with the GKE Gateway controller blog
- Network Performance for all of Google Cloud in Performance Dashboard site
- Go from Database to Dashboard with BigQuery and Looker blog
- Introducing Open Saves: Open-source cloud-native storage for games blog

Interview
- Cloud Run site
- Cloud Functions site
- Serverless Computing site
- The power of Serverless: Get more done easily site
- App Engine site
- Building Serverless Applications with Google Cloud Run book
- MediaMarktSaturn site
- Ikea site
- Airbus site
- Veolia site

Sound Effects Attribution
- “Fanfare1” by N2P5 of
- “Banjo Opener” by Simanays of
Kaslin Fields joins Stephanie Wong hosting the podcast this week as we talk all about GKE Autopilot with our guests Yochay Kiriaty and William Denniss. GKE Autopilot manages tasks like choosing the quantity and size of nodes, so deploying workloads is faster and machines are used efficiently. Autopilot also offers cluster management options, including monitoring the health of nodes and other components. William and Yochay explain that GKE Autopilot was built to help companies use resources efficiently and give clients more time to focus on their projects. Important efficiency features that are optional in GKE, like multidimensional pod autoscaling, are employed automatically in Autopilot, giving clients peace of mind. Kubernetes best practices are deployed automatically for projects, so clients can rest assured things will run as quickly and smoothly as possible without extra work. Kubernetes is a great way to manage containers, and our guests describe the cases where this tool is best suited. We compare GKE standard mode and Autopilot, and Yochay tells us when developers might choose standard mode to allow for more specific customization. He talks about migrating between standard and Autopilot clusters, with the goal of easy migration by the end of this year. Security is important for GKE, and we talk about the Autopilot security configurations and why they were chosen. Later, our guests walk us through the process of running a Kubernetes project on Autopilot, highlighting the decisions this tool makes automatically for you and why. Though Autopilot sounds very much like a serverless offering, William explains the differences between tools like Cloud Run and GKE Autopilot. We also hear about the future of Autopilot, including some exciting new features coming soon.

Yochay Kiriaty
Yochay is a Product Manager for GKE responsible for security.

William Denniss
William is a Product Manager for GKE Autopilot. He’s currently writing a book called Kubernetes Quickly.
Cool things of the week
- Google Cloud Region Picker site
- Faster, cheaper, greener? Pick the Google Cloud region that’s right for you blog
- 5 resources to help you get started with SRE blog

Interview
- Kubernetes site
- GKE site
- Autopilot Overview docs
- GCP Podcast Episode 252: GKE Cost Optimization with Kaslin Fields and Anthony Bushong podcast
- Multidimensional Pod Autoscaling docs
- Docker site
- Cloud Run site
- Introducing GKE Autopilot: a revolution in managed Kubernetes blog
- Creating an Autopilot cluster docs

What’s something cool you’re working on?
Kaslin has been working on KubeCon EU as a volunteer and will be presenting there as well.
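One of the practices the episode describes, autoscaling configured through standard Kubernetes APIs, carries over to Autopilot unchanged. As a sketch (the Deployment name, replica counts, and utilization target below are illustrative placeholders, not details from the episode), a HorizontalPodAutoscaler that scales a workload on CPU utilization looks like this:

```yaml
# Hypothetical HPA for a Deployment named "web"; names and numbers are placeholders.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2        # keep a small baseline for availability
  maxReplicas: 10       # cap spend during traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods above 70% average CPU
```

On Autopilot, a manifest like this is all you manage: the nodes backing the new pods are provisioned and billed per pod, which is the division of labor the guests describe.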
This week on the podcast, Stephanie Wong and Alexandrina Garcia-Verdin are diving into an important topic for our global community: sustainability and carbon-aware computing. Kendal Smith, program manager for Carbon-Intelligent Computing, and Chris Talbott, leader of the sustainability product marketing efforts at Google Cloud, start the show by telling us why sustainability is so important in the tech world. Environmentally conscious data centers are an important part of Google Cloud’s sustainability efforts. Using computing in the smartest way possible, Kendal tells us, is the root of green computing. Wind, solar, and other low-carbon or carbon-free energy sources are employed at Google Cloud data centers to accomplish this goal. Kendal and Chris detail the green goals Google has met or exceeded, including carbon neutrality in 2007, as well as future goals. Chris explains how Google Cloud customers have taken advantage of Google’s sustainability practices and been inspired in their own businesses. Kendal details the Carbon-Intelligent Computing Platform and how it adjusts compute times to align with available carbon-free energy. We hear about Google’s sustainability metrics, including the Carbon-Free Energy Percentage, and how these measurements can help Google and its customers run environmentally friendly applications. Chris describes the process he and his team go through when helping Google clients design their carbon-aware strategy. To wrap up the show, our guests talk about the future of decarbonized computing at Google.

Kendal Smith
Kendal is the Program Manager for Carbon-Intelligent Computing at Google, which reduces the carbon footprint of data centers by exploiting flexibility in compute workloads. She also helps Google engineers build products efficiently and advises other Alphabet Bets on carbon measurement and tracking.

Chris Talbott
Chris leads sustainability product marketing and customer engagement efforts for Google Cloud and works on opening new Google Cloud data centers throughout the globe. He helps customers improve the environmental impact of their IT operations and identify new opportunities to tackle climate change challenges with cloud technology.

Cool things of the week
- Active Assist’s new feature, predictive autoscaling, helps improve response times for your applications site
- Maximizing developer productivity video

Interview
- Google Carbon Aware Computing Workshop 2021 site
- Our data centers now work harder when the sun shines and wind blows blog
- How carbon-free is your cloud? New data lets you know blog
- Google Cloud Region Picker site

What’s something cool you’re working on?
Alexandrina is working on a new series called People & Planet AI. The first episode, Recovering global wildlife populations using ML, is out now. She’s also been working on internal websites to share climate information. Stephanie has been working on a blog post about AppSheet Automation, which we talked about in depth last week on the podcast.
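Google’s Carbon-Intelligent Computing Platform itself is internal, but its core idea, shifting flexible compute toward the hours with the most carbon-free energy, can be sketched in a few lines. This is a toy illustration, not Google’s actual algorithm; the forecast numbers and function name are invented for the example:

```python
def greenest_start_hour(carbon_forecast, job_hours):
    """Pick the start hour that minimizes total grid carbon intensity
    (gCO2/kWh) summed over a flexible job's runtime. Toy sketch only."""
    windows = range(len(carbon_forecast) - job_hours + 1)
    return min(windows, key=lambda s: sum(carbon_forecast[s:s + job_hours]))

# Hypothetical hourly forecast: intensity dips when solar generation peaks.
forecast = [480, 450, 410, 300, 180, 120, 90, 110, 260, 390, 470, 500]
print(greenest_start_hour(forecast, 3))  # → 5 (hours 5-7: 120 + 90 + 110)
```

A real scheduler would also weigh deadlines, capacity, and priority, but the same "move flexible work to cleaner hours" objective is what Kendal describes.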
Stephanie Wong and co-host Carter Morgan learn all about the no-code experience of AppSheet Automation this week. Guests Jennifer Cadence and Prithpal Bhogill introduce us to AppSheet, a platform that empowers anyone to build applications without code. A strong focus on openness means AppSheet offers support for all manner of APIs and services, making it easy to use and customize. Jennifer starts by telling us how AppSheet increases productivity and satisfaction at work. She describes how people’s individual characteristics and use of time affect productivity and explains that automating tasks frees people up to work on higher-value tasks or focus on important issues. Employees are not only more productive but happier in their jobs when mundane or frustrating tasks are automated. Later, Prithpal describes using the software. The AppSheet Unified Platform supports any application creator, so users can build their apps and automations without ever leaving the AppSheet dashboard. Data stays where it is, with no upload requirements, further easing the build process. We hear some real-world uses of AppSheet Automation, including employee onboarding, customer support, and more. Prithpal takes us behind the scenes, using examples to explain the inner workings of AppSheet, and walks us through the steps of using this powerful tool. Jennifer tells us how the AppSheet Community helps shape the platform and talks about the future of AppSheet Automation.

Jennifer Cadence
Jennifer is the Product Marketing Manager for AppSheet. She’s also a dog lover, community builder, and curious human.

Prithpal Bhogill
Prithpal is the Lead Product Manager for AppSheet, a frequent blogger, and a featured speaker at several tech conferences.

Cool things of the week
- Choose your own cloud adventure video
- Recovering global wildlife populations using ML blog
- Introduction to AI Platform (Unified) docs

Interview
- AppSheet site
- AppSheet Community site
- Invisible Women book
- Apps Script site
- Workspace site

What’s something cool you’re working on?
Stephanie and Carter are working on some new features for the podcast! Stephanie will be speaking at CTC.

Sound Effects Attribution
- “Applause 1” by Ichapman1980 of
Brian Dorsey joins Stephanie Wong this week for an in-depth discussion of Workflows. Guests Kris Braun and Guillaume Laforge introduce us to Google Cloud Workflows, explaining that this fully managed serverless product helps connect services in the cloud. By facilitating the creation of an end-to-end schema, Workflows lets developers specify, in a detailed, visual format, which microservices or other software respond when certain events occur. Kris and Guillaume list the benefits of using Workflows and detail the many uses for this powerful tool. The ability to add detailed descriptions, for example, helps companies avoid errors when calling other pieces of software, and new employees have an easier time getting acquainted when the steps are clearly defined. Our guests use real-world examples to illustrate the three main uses for Workflows: event-driven, batch, and infrastructure automation. Workflows are flexible and customizable. Later, we hear about Cloud Composer and its relation to Workflows, and our guests help us understand which product is right for each type of client. The Workflows team continues to expand its offerings: more connectors are being added to allow developers to call other GCP services, and working with lists will soon be easier, allowing Workflows to run steps in parallel. Kris details other exciting updates coming soon, including Eventarc.

Kris Braun
Kris Braun is the Product Manager for three Google Cloud products that connect services to build applications: Workflows, Tasks, and Scheduler. Before Google, Kris’ adventures include founding and growing startups, leading a team of network security researchers investigating threats like Stuxnet, and writing the original BlackBerry simulator for app development. He’s a passionate advocate for opening job opportunities to skilled refugees displaced by war and disaster.

Guillaume Laforge
Guillaume Laforge is a Developer Advocate for Google Cloud, focusing on serverless technologies. More recently, he dove head first into Workflows, presenting the product at online events and creating articles, tips and tricks, and videos on the topic.

Cool things of the week
- How sweet it is: Using Cloud AI to whip up new treats with Mars Maltesers blog
- Turbo boost your Compute Engine workloads with new 100 Gbps networking blog
- Benchmarking higher bandwidth VM instances docs

Interview
- Workflows site
- Spanner site
- Cloud SQL site
- Cloud Composer site
- Pub/Sub site
- Cloud Run site
- Eventarc site
- Eventarc Documentation docs
- Workflows Insiders site
- Quickstarts site
- How-To Guides site
- Syntax Reference site
- Guillaume’s Workflow Tips and Tricks blog
- A first look at serverless orchestration with Workflows blog
- Orchestrating the Pic-a-Daily serverless app with Workflows blog
- Better service orchestration with Workflows blog
- Get to know Workflows, Google Cloud’s serverless orchestration engine blog
- 3 common serverless patterns to build with Workflows blog
- Introduction to serverless orchestration with Workflows codelab
- Pic-a-Daily Serverless Workshop codelab
- Pic-a-daily: Lab 6—Orchestration with Workflows codelab

What’s something cool you’re working on?
Brian is working on use cases around VMs. Stephanie has been writing about database migration.
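To give a flavor of the end-to-end schema Kris and Guillaume describe, here is a minimal, hypothetical Workflows definition. The step names and URL are placeholders invented for this sketch; it calls an HTTP endpoint with the built-in http.get connector and returns part of the response:

```yaml
# Hypothetical two-step workflow: fetch data over HTTP, then return it.
main:
  steps:
    - fetchData:
        call: http.get              # built-in HTTP connector
        args:
          url: https://example.com/api/items   # placeholder endpoint
        result: apiResponse         # response stored for later steps
    - returnResult:
        return: ${apiResponse.body}
```

Each named step is visible in the console’s execution view, which is what makes errors in service-to-service calls easier to spot than in ad hoc glue code.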
Hosts Stephanie Wong and Priyanka Vergadia learn about data governance this week in an interesting chat with Jessi Ashdown and Uri Gilad. While data governance includes security measures, the overarching term also means knowing your data: where it is and how to use it. In their book, Jessi, Uri, and their co-authors hope to make data governance more accessible by sharing the knowledge Google has developed over twenty-plus years. We talk about the challenges companies of all sizes face implementing data governance frameworks, and Uri shares a few tips for streamlining the process. Communication and prioritization are important no matter the size of your team. Companies must also understand the sensitivity of their data, how it’s protected and managed, and why it’s collected. A thoughtful, thorough understanding of which data gives you the most bang for your buck can help companies prioritize certain data collection, make better decisions, scale efficiently, and save money. When communicating with team members, it’s important to share vital information about the data: knowing who’s in charge of what data, for example, makes accessing that data faster. With proper communication and thorough prioritization, teams can begin to think about how developing automated tools can increase functional data utilization. Later, we discuss the ways companies can support employees on the data governance journey by clearly communicating best-practice rules. Uri describes how Google uses data governance principles and shares resources Google has published that detail these steps further. Tools like BigQuery and Data Catalog are Google-built products meant to provide companies with more automated data governance solutions. Jessi and Uri wrap up the show with some more best practices in the data governance sphere, like using proper metadata to increase the trustworthiness of data. Uri also details the tools Google Cloud has developed to make your data life easier, giving examples of companies putting these tools into practice.

Jessi Ashdown
Jessi Ashdown is a User Experience Researcher for Google Cloud who conducts user studies with customers from all over the world and uses the findings and feedback from these studies to help inform and shape Google’s data governance products to best serve those users’ needs.

Uri Gilad
Uri leads the data governance efforts within the Data Analytics area in Google Cloud. As part of his role, Uri is spearheading a cross-functional effort to create the relevant controls, management tools, and workflows that enable a GCP customer to apply data governance policies in a unified fashion wherever their data may be in their GCP deployment. Prior to Google, Uri served as an executive in multiple data security companies, most recently as the VP of Product at MobileIron, a public Zero Trust/endpoint security platform. Uri was an early employee and a manager at Check Point and Forescout, two well-known security brands. Uri holds degrees from Tel Aviv University and the Technion, Israel’s Institute of Technology. You can find him on LinkedIn.

Cool things of the week
- Batter up! Anthos on bare metal helps MLB gear up for upcoming season blog
- Introducing Network Connectivity Center: A revolution in simplifying on-prem and cloud networking blog

Interview
- Data Governance: The Definitive Guide: People, Processes, and Tools to Operationalize Data Trustworthiness book
- Goods White Paper doc
- Dremel White Paper doc
- BigQuery site
- Data Catalog site
- Identity and Access Management site
- Strata Data Superstream Series event

What’s something cool you’re working on?
Priyanka has been working on GCP Comics and new GCPSketchnotes. Stephanie is working on an animated series about data centers.
This week on the podcast, fellow Googlers Kaslin Fields and Anthony Bushong chat with hosts Mark Mirchandani and Stephanie Wong about how to optimize your spending with Google Kubernetes Engine. Cost optimization doesn’t necessarily mean lower costs, Kaslin explains. It means running your application the best possible way and accommodating things like traffic spikes while keeping costs as low as possible. As our guests tell us, standard best practices can aid in optimization, but when it comes to efficiently running on a budget, there are more tips and tricks available in GKE. One of GKE’s newest operating modes, Autopilot, means Kubernetes nodes are now managed by Google. Customers pay by the pod, so the focus can be on the application rather than the details of clusters and their optimization. Best practices for resource utilization and autoscaling are included with Autopilot. Kaslin and Anthony break up Google’s GKE cost optimization tips into four categories: multi-tenancy, autoscaling, infrastructure choice, and workload best practices, and tell us how company culture affects these decisions. Proper education around Kubernetes and GKE specifically is the first step to using resources most efficiently, Anthony tells us. Keeping tenants separate and resources well managed on multi-tenant clusters is made easier with Namespaces. Scaling pods and the infrastructure around them is an important part of optimization as well, and Anthony helps us understand the best practices for fine-tuning the autoscaling features in GKE. Scaling infrastructure to handle spikes or lulls is an automatic feature with Autopilot, helping projects run smoothly. To control workload efficiency, GKE now offers a host of features, including horizontal, vertical, and multidimensional pod autoscaling. Later, we walk through the steps for implementing some of these optimization decisions while keeping your application running.
GKE Usage Metering is a useful tool for measuring tenant usage in a cluster so resource distribution can be managed more easily. Kaslin Fields Kaslin is a Developer Advocate at Google Cloud, where she focuses on Google Kubernetes Engine. Anthony Bushong Anthony is a Specialist Customer Engineer at Google Cloud, where he focuses on Kubernetes. Cool things of the week A2 VMs now GA—the largest GPU cloud instances with NVIDIA A100 GPUs blog How carbon-free is your cloud? New data lets you know blog Our third decade of climate action: Realizing a carbon-free future blog Interview Kubernetes site GKE site Best practices for running cost-optimized Kubernetes applications on GKE docs Docker site Autopilot overview docs Namespaces docs Kubernetes best practices: Organizing with Namespaces blog Optimize cost to performance on Google Kubernetes Engine video Using node auto-provisioning docs Scaling workloads across multiple dimensions in GKE blog Enabling GKE usage metering docs Kubernetes in Google Cloud Qwiklabs site Kubernetes Engine Qwiklabs site Cloud Operations for GKE Qwiklabs site Earn the new Google Kubernetes Engine skill badge for free blog Beyond Your Bill videos Cloud On Air Webinar: Hands-on Lab: Optimizing Your Costs on Google Kubernetes Engine site Cloud OnBoard site Adopting Kubernetes with Spotify video
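The horizontal pod autoscaling discussed above follows a simple, documented rule: scale the replica count by the ratio of observed metric to target metric. A minimal Python sketch of the Kubernetes HPA formula (illustrative only, not GKE’s actual implementation):

```python
import math

def desired_replicas(current: int, current_util_pct: float, target_util_pct: float) -> int:
    # Kubernetes HPA scaling rule: desired = ceil(current * currentMetric / targetMetric)
    return math.ceil(current * current_util_pct / target_util_pct)

# Four pods averaging 90% CPU against a 60% target scale out to six;
# the same pods averaging 30% scale back in to two.
print(desired_replicas(4, 90, 60))  # → 6
print(desired_replicas(4, 30, 60))  # → 2
```

The same ratio logic is why a well-chosen target utilization matters for cost: too low a target and the cluster holds idle replicas, too high and spikes outrun the scale-out.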
Stephanie Wong joins our old pal Mark Mirchandani this week to chat with guests Ameet Jani and Kiran Nair about BeyondCorp Enterprise and the way enterprise companies are using this security software. Ameet starts the show explaining BeyondCorp’s three pillars of security, including how detailed customer and client knowledge aids in security. Kiran elaborates, stressing the importance of the web browser’s contribution to a secure experience. With BeyondCorp Enterprise offerings, companies can layer additional protections in the cloud, supplementing the often-insufficient network perimeter model and adding better security protections across devices. BeyondCorp offers a simpler implementation structure as well. Things like monitoring can be switched on with a click. We hear about the features of BeyondCorp, including how users help shape the way BeyondCorp protects their projects. Ameet walks us through how a client could add BeyondCorp to their current security infrastructure and the specific benefits of doing so. BeyondCorp Enterprise, an easy off-the-shelf offering, was inspired by Google’s own security measures. With automatic added protections in Chrome, BeyondCorp Enterprise takes the most secure browser in the world and ups the game for enterprise employees working from any device. Kiran describes these additional measures and why they’re important for enterprise users. Ameet and Kiran tell us the steps required to set up the software and the customizations available. Enterprise customers should think through groups of users and what will be allowed by each. On the browser side, the three tiers of security features, including invisible features, can be implemented and changed easily. With the new BeyondCorp Enterprise, enterprise clients are now able to take advantage of the advanced security of the cloud. Through real company examples, Ameet and Kiran share with us the ways this software is already changing the enterprise security game. Kiran Nair Kiran Nair is a product manager on Google Chrome.
His focus area is security and keeping Chrome users safe from web-based threats. Besides spending the last 12 years building software and hardware products, Kiran is a certified yoga trainer and enjoys a casual game of tennis in the evening. Ameet Jani Ameet is the product manager for BeyondCorp Enterprise. Cool things of the week Introducing #AskGoogleCloud: A community-driven YouTube live series blog Cloud On Air: Build the future with Google Kubernetes Engine (GKE) event Google Cloud Born-Digital Summit: Inspiring the next generation of technology leaders blog Interview BeyondCorp site BeyondCorp Enterprise on Google site GCP Podcast Episode 221: BeyondCorp with Robert Sadowski podcast An overview: “A New Approach to Enterprise Security” research paper How Google did it: “Design to Deployment at Google” research paper Google’s frontend infrastructure: “The Access Proxy” research paper Migrating to BeyondCorp: “Maintaining Productivity while Improving Security” research paper The human element: “The User Experience” research paper Secure your endpoints: “Building a Healthy Fleet” research paper Question of the week Can you clearly explain GCP policy resource inheritance? What does it mean when the policy is effectively a union or additive? Resource Manager Understanding hierarchy evaluation Guide to Cloud Billing Resource Organization & Access Management
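The question of the week above hinges on one rule: a resource’s effective IAM policy is the union of the bindings on the resource and on every ancestor in the hierarchy (organization → folder → project → resource), so walking down the tree only ever adds permissions. A toy Python sketch of that union, with made-up resource names and bindings:

```python
# Illustrative bindings per hierarchy node (names are hypothetical).
policies = {
    "org":             {("roles/viewer", "alice")},        # set at the organization
    "org/proj":        {("roles/editor", "bob")},          # set at the project
    "org/proj/bucket": {("roles/storage.admin", "carol")}, # set on the resource
}

def effective_policy(path: str) -> set:
    # The effective policy is the UNION of bindings along the ancestry chain;
    # nothing set higher up can be subtracted lower down.
    bindings = set()
    parts = path.split("/")
    for i in range(len(parts)):
        bindings |= policies.get("/".join(parts[: i + 1]), set())
    return bindings

# alice's org-level viewer role is inherited all the way down to the bucket.
print(sorted(effective_policy("org/proj/bucket")))
```

This is why the answer to “what does additive mean?” is that granting a role anywhere above a resource grants it on the resource too.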
Jenny Brown and Mark Mirchandani are back this week to celebrate a special anniversary! This year marks ten years since the launch of the first Chromebooks, and our guests, Angela Gosz and Courtney Harrison, are here to reflect on the past and talk about the future of Chrome OS. Chromebooks powered by Intel allow users to get the most out of their endpoints, serving as a secure and stable entry point to the cloud. Our guests describe the key groups of Chromebook users and how the security, ease of use, and portability of Chrome OS benefit each group. The Google Admin Console provides more than 500 customizable security features to tailor the experience for employees or end customers, Angela explains. The changes brought on by the pandemic meant more companies had to support a distributed business, and Chrome OS has been able to facilitate this transition easily. With zero-touch enrollment, Chromebooks can be sent directly to employees, bypassing IT. Chromebooks can be configured through the Google Admin Console without any physical contact. Courtney tells us about her experiences with Chrome OS at Intel and how the automatic updates, computing speed, and other features have made her job easier. She explains the process of working with Google to develop Chromebook hardware and how the cloud comes into play for maximum performance. We talk about the many Chromebook options offered and what options will be available in the future. Angela Gosz Angela Gosz is a Customer Success Manager on the Chrome Enterprise Team, based out of Google Chicago. With 17 years of experience in the IT industry, Angela has been on the leading edge of digital transformation implementations, supporting enterprise organizations and partners to adopt and optimize their endpoint computing strategy - especially in healthcare. Today she ensures customers realize the full potential of their investment in Chrome OS as a cloud-first endpoint.
Outside of work, she has been meditating daily for 5 years, teaches yoga, and is a certified Reiki practitioner. Angela holds a Bachelor’s Degree in Journalism from the University of Wisconsin-Madison. Courtney Harrison Courtney is an Account Director with Intel Corporation based in the San Francisco Bay area. Currently Courtney leads a team that supports all of the Intel business interactions with Alphabet and Google. A twenty-one-year Intel veteran, Courtney has spent the past fifteen years in field sales working with top multi-national customers and local OEMs. Courtney began her career at Intel in CPU operations. Courtney has both a Bachelor’s and Master’s Degree from Stanford University in Industrial Engineering. Cool things of the week A new podcast explores the unseen world of data centers blog Back by popular demand: Google Cloud products in 4 words or less (2021 edition) blog Save the date for Google Cloud Next ‘21: October 12-14, 2021 blog Interview Intel site Chromebook site Chrome OS site Chromebook turns 10 site Building the future of business computing: 10 years of Chrome OS blog Form Factor Portfolio site Deploy devices with zero-touch enrollment site Thunderbolt site WiFi 6 site CloudReady site MCA site What’s something cool you’re working on? Mark is working on Costs meet code with programmatic budget notifications. Sound Effects Attribution “LeDancing” by Frankum and “Jingle Romantic” by Jay_You
Jenny Brown co-hosts with Mark Mirchandani this week for a great conversation about the ML lifecycle with our guests Craig Wiley and Dale Markowitz. Using a real-life example of bus cameras detecting potholes, Dale and Craig walk us through the steps of designing, building, implementing, and improving on a piece of machine learning software. The first step, Craig tells us, is to identify the data collected and determine its viability in an ML model. He describes how to get the best data for your project and how to keep the data, code, and libraries consistent to allow better analysis by your ML models. He talks about the importance of a Feature Store to aid in data consistency. Craig explains how machine learning frameworks like TensorFlow are great tools to improve consistency in the ML environment as well, making it easier to improve your model and even to build new ones using the same data. Keeping this consistency from data-scientist analysis through ML development to model deployment means a more efficient process and product. Evaluating models after production is an important step in the lifecycle as well to ensure accuracy, validity, and performance of the model. Craig gives us some examples and tips on monitoring models after they’ve been deployed. We talk about the challenges of scaling ML projects and Craig offers advice for developers and companies looking to build ML projects. Dale Markowitz Dale Markowitz is an Applied AI Engineer for ML on Google Cloud. Before that, she was a software engineer in Google Research and an engineer at the online dating site OkCupid. Craig Wiley Craig is the Director of Product for Google Cloud’s AI Platform. Prior to Google, Craig spent nine years at Amazon as the General Manager of Amazon SageMaker, AWS’ machine learning platform, as well as in Amazon’s 3rd Party Seller Business.
Craig has a deep belief in democratizing the power of data; he pushes to improve the tooling for experienced users while seeking to simplify it for the growing set of less experienced users. Outside of work he enjoys spending time with his family, eating delicious meals, and enthusiastically struggling through small home improvement projects. Cool things of the week Introducing GKE Autopilot: a revolution in managed Kubernetes blog At your service! With schedule-based autoscaling, VMs are at the ready blog Interview Google Cloud AI and Machine Learning Products site GCP Podcast Episode 240: reCAPTCHA Enterprise with Kelly Anderson + Spring ML Potholes with Eric Clark podcast Using machine learning to improve road maintenance blog Key requirements for an MLOps foundation blog TensorFlow site Kubeflow Pipelines site TensorBoard site How to dub a video with AI video Can AI make a good baking recipe? video Machine learning without code in the browser video What’s something cool you’re working on? Jenny started a new podcast that reads interesting Google blog posts over at Google Cloud Reader. Our friend Dr. Anton Chuvakin started the Cloud Security Podcast by Google. Read more about it and listen here. Follow the show and hosts on Twitter Cloud Security Podcast Anton and Tim And listen to Anton on the GCP Podcast Episode 218: Chronicle Security with Dr. Anton Chuvakin and Ansh Patniak.
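The train-time/serve-time consistency Craig emphasizes, one of the motivations for a Feature Store, comes down to computing each feature exactly one way in both pipelines. A minimal Python sketch with a hypothetical bucketing feature (the boundaries and field names are made up for illustration):

```python
def age_bucket(age: float) -> int:
    # One shared transform, used by BOTH the offline training pipeline and the
    # online serving path, so the model never sees two definitions of "age_bucket".
    boundaries = [18, 25, 35, 50, 65]
    return sum(age >= b for b in boundaries)

training_example = {"age_bucket": age_bucket(42)}  # computed in a batch job
serving_request = {"age_bucket": age_bucket(42)}   # computed at prediction time
assert training_example == serving_request          # no training/serving skew
print(training_example)  # → {'age_bucket': 3}
```

When the transform is instead re-implemented twice, even a small divergence (say, `>` versus `>=`) silently shifts the feature distribution the deployed model sees, which is exactly the skew a Feature Store is meant to prevent.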
Mark Mirchandani and Stephanie Wong are back this week as we learn about all the new things happening with Google Cloud Spanner. Our guests this week, Dilraj Kaur and Christoph Bussler, describe Cloud Spanner as a fully managed relational database that boasts unlimited scaling and advanced consistency and availability. Unlimited scaling truly means unlimited, and Chris explains why Cloud Spanner offers this feature and how it’s making database design and development easier. Dilraj and Chris tell us all about the cool new features Spanner has developed, like generated columns and foreign keys, and how customer needs influenced these developments. Chris walks us through the process of using some of these new features, including how developers can monitor their database systems. Managed backups and multi-region configuration are other recent additions to Cloud Spanner, and our guests explain how these are used by current enterprise clients. Dilraj and Chris explain the automatically managed features of Spanner versus the customer-managed features and how people set up and manage database projects. We hear examples of companies using Cloud Spanner and how it has improved their businesses. Dilraj Kaur Dilraj Kaur is an Enterprise Customer Engineer specializing in data management. She has been with Google for about 2.5 years and is based in Atlanta. Christoph Bussler As a Solutions Architect, Chris focuses on databases, data migration, and data integration in enterprise customer settings. See his professional work and background on his website. Cool things of the week New to Google Cloud?
Here are a few free trainings to help you get started blog Start your skills challenge today site Service Directory is generally available: Simplify your service inventory blog Interview Google Cloud Spanner site GCP Podcast Episode 62: Cloud Spanner with Deepti Srivastava podcast Using the Cloud Spanner Emulator docs Cloud Spanner Ecosystem site Cloud Spanner Qwiklabs site Google Cloud Platform Community On Slack site Creating and managing generated columns docs WITH Clause docs Foreign Keys docs Numeric Data Type docs Information schema docs Overview of introspection tools docs Backup and Restore docs Multi-region configurations docs ShareChat: Building a scalable data-driven social network for non-English speakers globally site Streamlining infrastructure for the world’s most dynamic financial market site What is Cloud Spanner? video What’s something cool you’re working on? Mark has been working on budgeting blog posts, including Protect your Google Cloud spending with budgets. Stephanie is working on her data center animation series.
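Generated columns and foreign keys like the ones Dilraj and Chris discuss are declared in Spanner’s GoogleSQL DDL. A hedged sketch, held in Python strings; the table and column names below are illustrative, not from the episode:

```python
# Illustrative Cloud Spanner DDL (GoogleSQL dialect); names are made up.
CREATE_SINGERS = """
CREATE TABLE Singers (
  SingerId  INT64 NOT NULL,
  FirstName STRING(100),
  LastName  STRING(100),
  -- Generated column: Spanner computes and stores FullName automatically
  -- whenever FirstName or LastName changes.
  FullName  STRING(201) AS (CONCAT(FirstName, ' ', LastName)) STORED,
) PRIMARY KEY (SingerId)
"""

CREATE_ALBUMS = """
CREATE TABLE Albums (
  AlbumId  INT64 NOT NULL,
  SingerId INT64 NOT NULL,
  Title    STRING(MAX),
  -- Foreign key: Spanner enforces that every album references an existing singer.
  CONSTRAINT FK_AlbumSinger FOREIGN KEY (SingerId) REFERENCES Singers (SingerId),
) PRIMARY KEY (AlbumId)
"""

print("STORED" in CREATE_SINGERS, "FOREIGN KEY" in CREATE_ALBUMS)  # → True True
```

In a real project these statements would be submitted through a schema update (for example via the client library or `gcloud`); the "Creating and managing generated columns" and "Foreign Keys" docs linked above cover the exact syntax and restrictions.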
This week on the podcast, Mark Mirchandani and Gabi Ferrara talk with Nimesh Bhagat about Cloud SQL Insights. This powerful tool enables developers to diagnose database issues for faster, smoother performance. Nimesh tells us the inspiration behind Cloud SQL Insights’ development and describes its biggest benefits. One of the important aspects of Insights is the application-centric view developers gain by tagging database queries with SQL comments. These tags are aggregated in Insights and give developers a visualization of the database queries. Here, developers can see load patterns and use that information to improve database efficiency. Cloud SQL Insights offers managed database analysis that helps developers understand the past and predict the future. Simplifying the journey of database debugging, Nimesh explains, was the goal of creating Cloud SQL Insights. He takes us through the process of using the software, pointing out the improvements Insights makes over the old way. Cloud SQL Insights only launched in January, but it’s already helping numerous clients with their projects. Nimesh describes these real-world uses, including Major League Baseball’s experience as part of the Insights Early Access Program. Nimesh Bhagat Nimesh is a product manager at Google Cloud, where he leads Cloud SQL Insights. He has worked across engineering and product roles, building highly available and high-performance enterprise infrastructure used by Fortune 500 companies. His passion lies in combining powerful infrastructure with simple user experience so that every business and developer can build software at scale and velocity.
Cool things of the week A new collaboration with Google Cloud blog Don’t fear the authentication: Google Drive edition blog Interview Cloud SQL Insights docs Cloud SQL Documentation docs GCP Podcast Episode 163: Cloud SQL with Amy Krishnamohan podcast Google Cloud Monitoring site Database observability for developers: introducing Cloud SQL Insights blog Introduction to Cloud SQL Insights codelab Boost your query performance troubleshooting skills with Cloud SQL Insights blog Introducing Sqlcommenter: An open source ORM auto-instrumentation library blog Introducing Cloud SQL Insights video Cloud SQL GitHub site What’s something cool you’re working on? Gabi is working on several things, including Schema Migrations with CI/CD pipelines. She is always available on Twitter and she offers free office hours! Sound Effects Attribution “Small Audience Laugh” by Tim Kahn
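The query tagging Nimesh describes (which the open source Sqlcommenter library linked above automates for several ORMs) amounts to appending key/value pairs in a trailing SQL comment that Insights can aggregate on. A hand-rolled Python sketch, with illustrative tag names:

```python
def tag_query(sql: str, **tags: str) -> str:
    # Append a Sqlcommenter-style trailing comment; Cloud SQL Insights groups
    # database load by tags like these so queries map back to application code.
    body = ",".join(f"{k}='{v}'" for k, v in sorted(tags.items()))
    return f"{sql} /*{body}*/"

print(tag_query("SELECT * FROM orders", controller="orders", action="list"))
# → SELECT * FROM orders /*action='list',controller='orders'*/
```

In practice you would use the Sqlcommenter integration for your ORM rather than hand-rolling this (the real format also URL-encodes values), but the idea is the same: the comment rides along with the query so the database-side view can be sliced by application concepts.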
Former GCP Podcast host Mark Mandel is our guest this week. He’s talking Google Cloud Game Servers, Agones, and more with Mark Mirchandani and guest host Stephanie Wong. Mark explains how dedicated game servers work and why gaming has embraced the idea of dedicated servers. Online multiplayer gaming, with its need for fast, consistent state sharing among players, benefits from dedicated servers, which offer cheating mitigation and reduced latency, as well as development flexibility. He tells us a little about the history of the open source project Agones and how it has helped Kubernetes run games that keep their state in memory efficiently on these dedicated servers. Google Cloud Game Servers work with layers of products to create a seamless multiplayer environment. Mark details this process and how Kubernetes, GKE, and Agones work together with these servers to accomplish this goal at scale. This situation is ideal for developers looking for the customizability and flexibility of a self-controlled system rather than a fully managed lift-and-shift model. Mark talks about the features of GCGS, including the versioning configuration system that allows you to create multiple configurations, and rollouts that give you control over distribution. We also learn a little about game-building best practices and how Mark and his team advise and educate other game developers. Mark Mandel Mark Mandel is a Developer Advocate for the Google Cloud Platform. Hailing from Australia, Mark built his career developing backend web applications, which included several widely adopted open source projects, and running an international conference in Melbourne for several years. Since then he has focused on becoming a polyglot developer, building systems in Go, JRuby, and Clojure on a variety of infrastructures. In his spare time he plays with his dog, trains martial arts, and reads too much fantasy literature.
Cool things of the week Google Cloud Docs Samples docs Limiting public IPs on Google Cloud blog Interview Google Cloud Game Servers site Agones site Agones Prerequisite Knowledge docs Kubernetes site GKE site Online Game Technology, Drawn Badly videos GCP Podcast Episode 142: Agones with Mark Mandel and Cyril Tovena podcast GCP Podcast Episode 202: Supersolid with Kami May podcast Multiplay site Accelbyte site Improbable site Find the right Google Cloud partner site Game Developers Conference site Agones on Slack site Agones on Twitter site Mark Mandel on Twitch site Mark Mandel on YouTube site What’s something cool you’re working on? Stephanie is working on Season of Scale season 5 and a data center animated series that will launch in a few weeks! Sound Effects Attribution “TrumpetBrassFanfare.wav” by ohforheavensake, “8-bit Video Game Sounds.wav” by ProjectsU012, and “music elevator.wav” by Jay_You
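The “fast, consistent state sharing” a dedicated server provides comes from keeping one authoritative copy of game state: clients only submit inputs, and the server applies them and broadcasts snapshots each tick. A toy Python sketch of that loop (real servers, like those Agones manages, run a fixed tick rate over UDP; everything here is illustrative):

```python
class AuthoritativeServer:
    """Toy authoritative game server: the server owns the state, clients send
    inputs, and the server publishes one consistent snapshot per tick."""

    def __init__(self):
        self.positions = {}  # single source of truth: player -> x position
        self.inbox = []      # inputs received from clients since the last tick

    def receive(self, player: str, dx: int):
        # Clients never write state directly; they only submit moves, which is
        # what makes server-side cheat mitigation possible.
        self.inbox.append((player, dx))

    def tick(self) -> dict:
        # Apply inputs in arrival order, then publish one consistent snapshot.
        for player, dx in self.inbox:
            self.positions[player] = self.positions.get(player, 0) + dx
        self.inbox.clear()
        return dict(self.positions)

server = AuthoritativeServer()
server.receive("p1", 2)
server.receive("p2", -1)
print(server.tick())  # → {'p1': 2, 'p2': -1}
```

Because every player’s view derives from the same server-side snapshot, there is no client whose local copy can drift (or be tampered with) and become the truth, which is exactly the property the episode attributes to dedicated servers.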
Dustin Dye and Alex Seegers of Botcopy are on the show today, chatting with hosts Mark Mirchandani and Priyanka Vergadia. Botcopy uses advanced AI technology along with excellent copywriting to create better chatbots. The software works directly on a company website and doesn’t require a login, allowing chats to stay anonymous. Our guests explain that their chatbots are treated like virtual employees, built and trained to function and speak appropriately for their specific job. Copywriting is an important part of this, as the conversational AI should continue to support the brand being represented and conversations should flow naturally. The bot personalities are developed through written copy and interactions with customers in instances like customer service, lead generation, and even some internal employee management needs. Later, we talk about how Dialogflow and Botcopy work together, including how Botcopy adds important user context to the conversation to facilitate more accurate bot responses. We hear more about Dialogflow CX and how the modular builder makes designing and controlling bot conversations easier. CX has also made managing multiple bots on a single account easier and team collaboration more efficient. The visual builder available in CX offers a better chatbot design experience, especially when multiple teams are working on the same bot. We hear examples of great use cases for Botcopy, like restaurant menus, clinical trials, and more. Alex and Dustin give developers valuable advice about working with clients to build their bots. Test early and often to build a robust bot capable of handling many situations. It’s important to have an analytics system in place to identify possible improvement areas as well. Dustin Dye Dustin Dye is co-founder and CEO of Botcopy. After developing branded character and dialogue content for the #1 business bots on Messenger and Slack, Dustin launched Botcopy in 2017.
Before co-founding Botcopy, Dustin had co-founded Expert Dojo, one of Silicon Beach’s largest startup incubators, serving, mentoring, and securing funding for some of the most exciting businesses coming out of LA. Dustin is a frequent keynote speaker at leading chatbot conferences in the US and abroad. Alexander Seegers Alexander Seegers is a co-founder and COO of Botcopy and heads up the product team. He holds a business degree from Northeastern and a UX certification from General Assembly. Alex has consulted tech leaders at Fortune 500 companies worldwide, spearheading their forays into conversational AI for multiple use cases at the enterprise level. In addition to big-picture leadership and vision, Alex is adept at numerous coding languages and complex systems architecture. Cool things of the week Introducing WebSockets, HTTP/2 and gRPC bidirectional streams for Cloud Run blog Take the first step toward SRE with Cloud Operations Sandbox blog Interview Botcopy site Botcopy Blog blog Contact Botcopy email Dialogflow site Miro site What’s something cool you’re working on? Priyanka is working on Dialogflow CX episodes for the Deconstructing Chatbots series.
Welcome back to a new year of Google Cloud Platform Podcasts! Mark Mirchandani and Emma Iwao host the first show of 2021 with special guest Rebecca Weekly of Intel. She joins us to talk about the partnership between Google Cloud and Intel. Describing the company’s goals of gathering, storing, managing, and analyzing data in all its forms to unlock the power of technology and information, Rebecca points out how well these align with Google’s own goals and why the partnership is such a natural fit. Rebecca explains the four pillars of the Google-Intel partnership, including the focus on infrastructure and app modernization to elevate the user experience. Through their work with Google, Intel has been able to optimize the move from on-prem to cloud for those clients who choose to make the shift, using their thorough client knowledge and Google Cloud expertise to facilitate a smooth transition. Rebecca walks us through the process of crafting this client experience, from choosing products and tools to identifying and solving any bottlenecks and optimizing the configuration using benchmarks. Later, we talk about the value of open source software in both the hardware and software worlds and why Intel believes so strongly in open source projects. Rebecca offers examples of clients successfully using Intel hardware and Google Cloud software, including ClimaCell and Kinsta. We get the inside scoop on future projects at Intel, like the next generation of scalable Xeon processors, and Rebecca talks about the future of data analysis and computing. Intel and the Intel logo are trademarks of Intel Corporation or its subsidiaries. Rebecca Weekly Rebecca leads the team that influences nearly every aspect of our cloud platform solutions across strategic planning, hardware and software enabling, marketing and sales.
Together they shape the development, production, and business strategy of Intel’s cloud platforms to ensure differentiation and platforms that enable TAM expansion with enthusiasm, collaboration, and urgency. She drives strategic collaborations with key partners including top cloud service providers, OxMs, ISVs & OSVs to ensure platform requirements meet our customer needs. In her “spare” time, she’s the lead singer of a funk & soul band, Sinister Dexter, was professionally trained in dance (tap, modern, and jazz), and is an experienced choreographer. She has two amazing little boys and loves to run (after them, and on her own). Rebecca graduated from MIT with a degree in Computer Science and Electrical Engineering. Cool things of the week 97 Things Every Cloud Engineer Should Know Book Introducing Google Cloud Workflows video Interview Intel site Google Cloud with Intel site TensorFlow site Anthos site Intel Select Solutions site PerfKit Benchmarker site Google Cloud Functions site ClimaCell site Blue Skies Ahead: ClimaCell Delivers Innovative Weather Prediction Solutions doc Kinsta site Benchmarking GCP’s Compute-Optimized VMs (C2) blog Arcules site Descartes Labs site DAOS site Optane site What’s something cool you’re working on? Emma was a guest on GCP Podcast Episode 167: World Pi Day with Emma Haruka Iwao. Emma is working on the Ruby 3.0 support and release and deprecation policy. Ruby is now available on Google Cloud Functions! Sound Effects Attribution “Partyhorn” by Milton and “ToiletFlush” by EminYildirim