 Episodes
Stream processing can be hard or easy depending on the approach you take and the tools you choose. That sentiment is at the heart of this discussion with Matthias J. Sax (Apache Kafka® PMC member; Software Engineer, ksqlDB and Kafka Streams, Confluent) and Jeff Bean (Sr. Technical Marketing Manager, Confluent). With immense collective experience in Kafka, ksqlDB, Kafka Streams, and Apache Flink®, they delve into the types of stream processing operations and explain the different ways of solving the problems each one raises.

The best stream processing tools they consider are Flink along with the options from the Kafka ecosystem: Java-based Kafka Streams and its SQL-wrapped variant, ksqlDB. Flink and ksqlDB tend to be used by divergent types of teams, since they differ in both design and philosophy.

Why use Apache Flink?

The teams using Flink are often highly specialized, with deep expertise and an absolute focus on stream processing. They tend to be responsible for unusually large, industry-outlying amounts of both state and scale, and they usually require complex aggregations. Flink can excel in these use cases, which potentially makes the difficulty of its learning curve and implementation worthwhile.

Why use ksqlDB/Kafka Streams?

Conversely, teams employing ksqlDB/Kafka Streams require less expertise to get started and less expertise and time to manage their solutions. Jeff notes that a developer's skills may not even be needed in some cases; those of a data analyst may suffice (a small sketch of what a ksqlDB statement looks like follows the episode links below). ksqlDB and Kafka Streams integrate seamlessly with Kafka itself, as well as with external systems through Kafka Connect. In addition to being easy to adopt, ksqlDB is also deployed in production stream processing applications that require large scale and state.

There are also considerations beyond the strictly architectural. Local support availability, the administrative overhead of using a library versus a separate framework, and the availability of stream processing as a fully managed service all matter. Choosing a stream processing tool is a fraught decision, partly because switching between them isn't trivial: the frameworks, the APIs, and the interfaces are all different. In addition to the high-level discussion, Jeff and Matthias share plenty of details you can use to understand the options, covering deployment models, transactions, batching, and parallelism, as well as a few interesting tangents along the way, such as the tyranny of state and the Turing completeness of SQL.

EPISODE LINKS
The Future of SQL: Databases Meet Stream Processing
Building Real-Time Event Streams in the Cloud, On Premises
Kafka Streams 101 course
ksqlDB 101 course
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more on Confluent Developer
Use PODCAST100 for an additional $100 of Confluent Cloud usage (details)
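To make the "SQL-wrapped variant" idea concrete, here is a minimal sketch, not taken from the episode, of submitting a ksqlDB persistent query to a ksqlDB server's REST /ksql endpoint from Python. The orders stream, field names, threshold, and the localhost:8088 address are all assumptions.

```python
# Hypothetical example: a ksqlDB persistent query that filters one stream into
# another, submitted over the ksqlDB REST API. Stream and field names are
# invented; assumes a ksqlDB server listening on localhost:8088.
import requests

statement = """
    CREATE STREAM large_orders AS
      SELECT order_id, customer_id, total
      FROM orders
      WHERE total > 1000;
"""

resp = requests.post(
    "http://localhost:8088/ksql",   # ksqlDB statements endpoint
    headers={"Content-Type": "application/vnd.ksql.v1+json; charset=utf-8"},
    json={"ksql": statement, "streamsProperties": {}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())                  # server reports the status of the new persistent query
```

The equivalent transformation written against the Kafka Streams Java library would be more code plus a build and deployment pipeline, which is the trade-off the episode keeps returning to.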
Apache Kafka® isn’t just for day jobs, according to Danica Fine (Senior Developer Advocate, Confluent). It can be used to make life easier at home, too!

Building out a practical Apache Kafka® data pipeline is not always complicated; it can be simple and fun. For Danica, the idea of building a Kafka-based data pipeline sprouted from the need to monitor the water level of her plants at home. In this episode, she explains the architecture of her hardware-oriented project and discusses how she integrates, processes, and enriches data using ksqlDB and Kafka Connect, a Raspberry Pi running Confluent's Python client, and a Telegram bot. Apart from the script on the Raspberry Pi, the entire project was coded within Confluent Cloud.

Danica's model Kafka pipeline begins with moisture sensors in her plants streaming data that is polled by an endless for-loop in a Python script on her Raspberry Pi (a rough sketch of such a producer loop follows the episode links below). The Pi in turn connects to Kafka on Confluent Cloud, where the plant data is sent serialized as Avro. She carefully modeled her data, sending an ID along with a timestamp, a temperature reading, and a moisture reading. On Confluent Cloud, Danica enriches the streaming plant data, which enters as a ksqlDB stream, with metadata such as moisture threshold levels, which is stored in a ksqlDB table.

She windows the streaming data into 12-hour segments in order to avoid constant alerts when a threshold has been crossed. Alerts are sent at the end of the 12-hour period if a threshold has been breached for a consistent stretch within it (one hour, for example). These are sent to the Telegram API using Confluent Cloud's HTTP Sink Connector, which pings her phone when a plant's moisture level is too low.

Potential future improvements include visualizations, another Telegram bot to register metadata for new plants, machine learning to anticipate watering needs, and potentially closing the loop by pushing data back to the Raspberry Pi, which could power a visual indicator on the plants themselves.

EPISODE LINKS
GitHub: raspberrypi-houseplants
Data Pipelines 101 course
Tips for Streaming Data Pipelines ft. Danica Fine
Watch the video version of this podcast
Danica Fine's Twitter
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
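Here is a rough sketch, not Danica's actual code, of what the Raspberry Pi side of a pipeline like this could look like using Confluent's Python client. The real project serializes readings as Avro against Schema Registry; this sketch uses JSON to stay self-contained, and the broker address, credentials, topic name, and read_moisture() helper are placeholders.

```python
# Sketch of a Raspberry Pi producer loop: read sensors, produce readings to a
# Kafka topic in Confluent Cloud. JSON is used here instead of Avro to keep the
# example self-contained; all names and credentials are placeholders.
import json
import time
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "<BOOTSTRAP_SERVER>",   # Confluent Cloud endpoint
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",
    "sasl.password": "<API_SECRET>",
})

def read_moisture(plant_id: int) -> dict:
    """Placeholder for the real sensor read over GPIO/I2C."""
    return {"plant_id": plant_id,
            "timestamp": int(time.time() * 1000),
            "moisture": 0.42,
            "temperature": 21.5}

while True:                                       # the "endless loop" from the episode
    for plant_id in (1, 2, 3):
        reading = read_moisture(plant_id)
        producer.produce("houseplant-readings",
                         key=str(plant_id),
                         value=json.dumps(reading))
    producer.poll(0)                              # serve delivery callbacks
    producer.flush()
    time.sleep(600)                               # one batch of readings every 10 minutes
```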
Apache Kafka® 3.2 delivers new KIPs in three different areas of the Kafka ecosystem: Kafka Core, Kafka Streams, and Kafka Connect. On behalf of the Kafka community, Danica Fine (Senior Developer Advocate, Confluent) shares the release highlights.

More than half of the KIPs in the new release concern Kafka Core. KIP-704 addresses unclean leader elections by allowing for further communication between the controller and the brokers. KIP-764 takes on the problem of a large number of client connections arriving in a short period during preferred leader election by adding the configuration `socket.listen.backlog.size`. KIP-784 adds an error code field to the response of the `DescribeLogDirs` API, and KIP-788 improves network traffic by allowing you to set the pool size of network threads individually per listener on Kafka brokers. Finally, in preparation for the KRaft protocol, KIP-801 introduces a built-in `StandardAuthorizer` that doesn't depend on ZooKeeper.

There are five KIPs related to Kafka Streams in the AK 3.2 release. KIP-708 brings rack-aware standby assignment by tag, which improves fault tolerance. Then there are three projects related to Interactive Queries v2: KIP-796 specifies an improved interface for Interactive Queries; KIP-805 allows state to be queried over a specific range; and KIP-806 adds two implementations of the Query interface, `WindowKeyQuery` and `WindowRangeQuery`. The final Kafka Streams project, KIP-791, enhances `StateStoreContext` with `recordMetadata`, which may be accessed from state stores.

Additionally, this release introduces Kafka Connect improvements, including KIP-769, which extends the `/connector-plugins` API, letting you list all available plugins and not just connectors as before. KIP-779 lets `SourceTasks` handle producer exceptions according to `errors.tolerance`, rather than instantly killing the entire connector by default. Finally, KIP-808 lets you specify the precision for the TimestampConverter single message transform.

Tune in to learn more about the Apache Kafka 3.2 release!

EPISODE LINKS
Apache Kafka 3.2 release notes
Read the blog to learn more
Download Apache Kafka 3.2.0
Watch the video version of this podcast
How much can Apache Kafka® scale horizontally, and how can you automatically balance, or rebalance, data to ensure optimal performance?

You may require the flexibility to scale or shrink your Kafka clusters based on demand. With experience engineering cluster elasticity and capacity management features for cloud-native Kafka, Ajit Yagaty (Confluent Cloud Control Plane Engineering) and Aashish Kohli (Confluent Cloud Product Management) join Kris Jenkins in this episode to explain how the architecture of Confluent Cloud supports elasticity. Kris suggests that optimal elasticity is like water from a faucet: you should be able to quickly obtain as many resources as you need, but at the same time you don't want the slightest amount to go to waste. But how do you specify the amount of capacity by which to adjust, and how do you know when it's necessary?

Aashish begins by explaining how elasticity on Confluent Cloud has come a long way since the early days of scaling via support tickets. It's now self-serve and can be accomplished by dialing up or down a desired number of CKUs, or Confluent Units for Kafka. A CKU corresponds to a specific amount of Kafka resources and has been made consistent across all three major clouds. You can specify the number of CKUs you need via the API, the CLI, or the Confluent Cloud UI.

Ajit explains in detail how, once your request has been made, cluster resizing is a two-step process: first capacity is added, and then your data is rebalanced. Rebalancing data on the cluster is critical to ensuring that optimal performance is derived from the available capacity. The amount of time it takes to resize a Kafka cluster depends on the number of CKUs being added or removed, as well as the amount of data to be rebalanced.

Of course, to request more or fewer CKUs in the first place, you have to know when it's necessary for your Kafka cluster(s). This can be challenging, as clusters emit a large variety of metrics. Fortunately, there is a single composite metric that you can monitor to help you decide, as Ajit explains in the episode.

Other topics covered by the trio include an in-depth explanation of how Confluent Cloud achieves elasticity under the hood (separate control and data planes, along with some Kafka dogfooding), future plans for autoscaling elasticity, scenarios where elasticity is critical, and much more.

EPISODE LINKS
Shrink a Dedicated Kafka Cluster in Confluent Cloud
Elastic Apache Kafka Clusters in Confluent Cloud
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
What are useful practices for migrating a system to Apache Kafka® and Confluent Cloud, and why use Confluent to modernize your architecture?

Dima Kalashnikov (Technical Lead, Picnic Technologies) is part of a small analytics platform team at Picnic, an online-only European grocery store that processes around 45 million customer events and five million internal events daily. An underlying goal at Picnic is to make decisions as data-driven as possible, so Dima's team collects events on all aspects of the company: from new stock arriving at the warehouse, to customer behavior on their websites, to statistics related to delivery trucks. Data is sent to internal systems and to a data warehouse.

Picnic recently migrated from their existing solution to Confluent Cloud for several reasons:

Ecosystem and community: Picnic liked the tooling present in the Kafka ecosystem. Being a small team, they aren't able to devote extra time to building boilerplate-type code such as connectors for their data sources or functionality for extensive monitoring. Picnic also has analysts who use SQL, so the team appreciated the processing capabilities of ksqlDB. Finally, they found that help isn't hard to locate if one gets stuck.

Monitoring: They wanted better monitoring; specifically, they found it challenging to measure SLAs with their former system, as they couldn't easily detect the positions of consumers in their streams.

Scaling and data retention times: Picnic is growing, so they needed to scale horizontally without having to worry about manual reassignment. They also hit a wall with their previous streaming solution with respect to the length of time they could retain data, which is a serious issue for a company that makes data-first decisions.

Cloud: Another consequence of being a small team is that they don't have the resources for extensive maintenance of their tooling.

Dima's team was extremely careful and took their time with the migration. They ran a pilot system simultaneously with the old system in order to make sure it could achieve their fundamental performance goals: complete stability, zero data loss, and no performance degradation. They also wanted to validate its costs.

The pilot was successful, and they now have a second, IoT pilot in the works that uses Confluent Cloud and Debezium to track the robotics data emanating from their automated fulfillment center. And it's a lot of data: Dima mentions that the robots in the center generate data sets as large as their customer event streams.

EPISODE LINKS
Picnic Analytics Platform: Migration from AWS Kinesis to Confluent Cloud
Picnic Modernizes Data Architecture with Confluent
Data Engineer: Event Streaming Platform
Watch this podcast in video
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka resources on Confluent Developer
Live demo: Event-Driven Microservices with Confluent
Use PODCAST100 to get $100 of free Confluent Cloud usage
Building Data Streaming App | Coding In Motion
Coding is inherently enjoyable and experimental. With the goal of bringing fun into programming, Kris Jenkins (Senior Developer Advocate, Confluent) hosts a new series of hands-on workshops, Coding in Motion, to teach you how to use Apache Kafka® and data streaming technologies for real-life use cases.

In the first episode, Sound & Vision, Kris walks you through the end-to-end process of building a real-time, full-stack data streaming application from scratch using Kafka and JavaScript/TypeScript. During the workshop, you’ll learn to stream musical MIDI data into fully managed Kafka using Confluent Cloud, then process and transform the raw data stream using ksqlDB. Finally, the enriched data streams are pushed to a web server to display the data in a 3D graphical visualization.

Listen to Kris preview the first episode of Coding in Motion, Sound & Vision, and join him in the workshop premiere to learn more.

EPISODE LINKS
Coding in Motion Workshop: Build a Streaming App for Sound & Vision
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
You already know Apache Kafka® is a distributed event streaming system for setting your data in motion, but how does its internal architecture work? No one can explain Kafka's internal architecture better than Jun Rao, one of its original creators and Co-Founder of Confluent. Jun has an in-depth understanding of Kafka that few others can claim, and he shares it with us in this episode and in his new Kafka Internals course on Confluent Developer.

One of Jun's goals in publishing the Kafka Internals course was to cover the evolution of Kafka since its initial launch. In line with that goal, he discusses the history of Kafka development, including the original thinking behind some of its design decisions, as well as how its features have been improved to better meet its key goals of durability, scalability, and real-time data.

With respect to its initial design, Jun relates how Kafka was conceived from the ground up as a distributed system, with compute and storage always maintained as separate entities so that they could scale independently. Additionally, he shares that Kafka was deliberately built for high throughput, since many of the popular messaging systems at the time of its invention were single node, but his team needed to process large volumes of non-transactional data, such as application metrics, various logs, clickstreams, and IoT information.

As for the evolution of its features, Jun explains two topics at length:

Consumer rebalancing protocol: The original "stop the world" approach to Kafka's consumer rebalancing, although revolutionary at the time of its launch, was eventually improved upon with a more incremental approach.

Cluster metadata: Moving from external ZooKeeper to the built-in KRaft protocol allows for better scaling by a factor of ten, according to Jun, and it also means you only need to worry about running a single binary.

The Kafka Internals course consists of eleven concise modules, each dense with detail, covering Kafka fundamentals in technical depth. The course also pairs with four hands-on exercise modules led by Senior Developer Advocate Danica Fine.

EPISODE LINKS
Kafka Internals course
How Apache Kafka Works: An Introduction to Kafka’s Internals
Coding in Motion Workshop: Build a Streaming App
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
What is event modeling and how does it differ from standard data modeling?

In this episode of Streaming Audio, Bobby Calderwood, founder of Evident Systems and creator of oNote, observes that at the dawn of the computer age, because memory and computing power were expensive, people moved away from time-and-narrative-oriented record-keeping systems (in the manner of a ship's log or a financial ledger) to systems based on aggregation. Such data-model systems, still dominant today, retain only the current state generated from their inputs, with the inputs themselves being lost.

A converse approach to the reductive data-model system is the event-model system, which is enabled by tools like Apache Kafka®, and which effectively saves every bit of activity that the system generates. The event model actually marks a return, in a sense, to the earlier, narrative-like recording methods.

To illustrate, Bobby uses a chess example to show the distinction between the data model and the event model. In a chess context, the event-modeling system would retain each move in the game from beginning to end, such that any moment in the game could be derived by replaying the sequence of moves. Conversely, chess based on the data model would save only the current state of the game, destructively mutating the data structure to reflect it. (A toy code sketch of this distinction follows the episode links below.)

The event model maintains an immutable log of all of a system's activity, which means that teams downstream from the transactions team have access to all of the system's data, not just the end transactions, and they can analyze the data as they wish to draw their own conclusions. Thus there can be several read models over the same body of events. Bobby has found that non-programming stakeholder teams tend to intuitively comprehend the event model better than other data paradigms, given its natural narrative form.

Transitioning from the data model to the event model, however, can be challenging. Bobby's event modeling platform, oNote, aims to help by providing a digital canvas that allows a system to be visually redesigned according to the event model. oNote generates Avro schemas based on its models, and also uses Avro to generate runtime code.

EPISODE LINKS
Event Sourcing and Event Storage with Apache Kafka
oNote
Event Modeling
Toward a Functional Programming Analogy for Microservices
Event-Driven Architecture - Common Mistakes and Valuable Lessons ft. Simon Aubury
Watch the video version of this podcast
Coding in Motion Workshop: Build a Streaming App
Kris Jenkins’ Twitter
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
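A toy Python sketch, not from the episode, that makes the chess distinction concrete: the data model mutates a single current position, while the event model replays an immutable log of moves to derive any position. The board representation is invented for illustration.

```python
# Data model vs. event model, illustrated with a deliberately tiny chess board.

# Data model: only the latest state survives; each move overwrites history.
board_state = {"e2": "white_pawn"}

def apply_move_in_place(state: dict, src: str, dst: str) -> None:
    state[dst] = state.pop(src)          # destructive update, no record of the move

apply_move_in_place(board_state, "e2", "e4")
print(board_state)                        # {'e4': 'white_pawn'} -- how we got here is gone

# Event model: an immutable log of moves; any past or present position can be
# rebuilt by replaying the log up to a chosen point.
move_log = [("e2", "e4"), ("e7", "e5"), ("g1", "f3")]

def replay(moves, upto=None) -> dict:
    state = {"e2": "white_pawn", "e7": "black_pawn", "g1": "white_knight"}
    for src, dst in moves[:upto]:
        state[dst] = state.pop(src)
    return state

print(replay(move_log, upto=1))           # position after White's first move
print(replay(move_log))                   # current position: derived, not stored
```

Several different read models (material count, opening classification, and so on) could be built by folding over the same move log, which is the point Bobby makes about downstream teams.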
New Relic runs one of the larger Apache Kafka® installations in the world, ingesting circa 125 petabytes a month, or approximately three billion data points per minute. Anton Rodriguez is the architect of the system, responsible for hundreds of clusters and thousands of clients, some of them implemented in non-standard technologies. In addition to the large number of servers, he works with many teams, which must all work together when issues arise.

Monitoring New Relic's large Kafka installation is critical and, of course, challenging, even for a company that itself specializes in monitoring. Specific obstacles include determining when rebalances are happening, identifying particularly old consumers, measuring consumer lag, and finding a way to observe all producing and consuming applications.

One way New Relic has improved the monitoring of its architecture is by consuming metrics directly from the Linux kernel using its eBPF technology, which lets programs run inside the kernel without changing source code or adding additional modules (the open source tool Pixie enables access to eBPF in a Kafka context). eBPF is very low impact, so it doesn't affect services, and it allows New Relic to see what's happening at the network level and to take action as necessary.

EPISODE LINKS
Monitoring Kafka Without Instrumentation Using eBPF
What Is eBPF and Why Does It Matter for Observability?
Kafka Monitoring
Kafka Summit: Monitoring Kafka Without Instrumentation Using eBPF
Watch the video version of this podcast
Kris Jenkins’ Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
Confluent Platform 7.1 expands upon its already innovative features, adding improvements in key areas that benefit data consistency, allow for increased speed and scale, and enhance resilience and reliability.

The previous Confluent Platform 7.0 release introduced Cluster Linking, which enables you to bridge on-premises and cloud clusters, among other configurations. Maintaining data quality standards across multiple environments can be challenging, though. To help with this problem, CP 7.1 adds Schema Linking, which lets you share consistent schemas across your clusters, synced in real time.

Confluent for Kubernetes lets you build your own private-cloud Apache Kafka® service. Now you can enhance the global resilience of your architecture by deploying to multiple regions. With the new release you can also configure custom volumes attached to Confluent deployments, and you can declaratively define and manage the new Schema Links. As of this release, Confluent for Kubernetes supports the full feature set of Confluent Platform.

Tiered Storage was released in Confluent Platform 6.0, and it offers immense benefits for a cluster by allowing older topic data to be offloaded from the broker into slower, long-term object storage. The reduced amount of local data makes maintenance, scaling out, recovery from failure, and adding brokers all much quicker. CP 7.1 adds compatibility with object storage from Nutanix, NetApp, MinIO, and Dell, integrations that have been put through rigorous performance and quality testing.

Health+, introduced in CP 6.2, offers intelligent cloud-based alerting and monitoring tools in a dashboard. New as of CP 7.1, you can choose to be alerted when anomalies in broker latency are detected, when there is an issue with the connectors linking Kafka to external systems, or when a ksqlDB query will interfere with a continuous, real-time processing stream.

Shipping with CP 7.1 is ksqlDB 0.23, which adds support for pull queries against streams, as opposed to only against tables: a milestone development that greatly helps when debugging, since a subset of messages within a topic can now be inspected. ksqlDB 0.23 also supports custom schema selection, which lets you choose a specific schema ID when you create a new stream or table rather than using the latest registered schema. A number of additional smaller enhancements are also included in the release.

EPISODE LINKS
Download Confluent Platform 7.1
Check out the release notes
Read the Confluent Platform 7.1 blog post
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get $100 of free Confluent Cloud usage (details)
Scaling Apache Kafka® can be tricky; scaling a team, even more so. When he was first hired, Domenico Fioravanti of Therapie Clinic was given the challenging task of assembling a sizable tech team from scratch while simultaneously building a scalable and decoupled architecture from the ground up. In addition, he wanted to deliver value to the company from day one. One way Domenico accomplished these goals was by focusing on managed solutions in order to avoid large investments in engineering know-how. Another was to deliver to production quickly by using the existing knowledge of his team.

Domenico's biggest initial priority was a real-time reporting dashboard that collated data generated by third-party systems, such as call centers and the front-of-house software that managed bookings and transactions. (Before Domenico's arrival, all reporting had been done by aggregating data from different sources through an expensive, manual, error-prone, and slow process, which tended to result in late and incomplete insights.)

Establishing an initial stack with AWS and a BI/analytics tool took only a month and required minimal DevOps resources, but Domenico's team ended up wanting to leverage their efforts to free up third-party data for more than just the reporting/data-insights use case. So they began considering Apache Kafka® as a central repository for their data. For Kafka itself, they investigated Amazon MSK vs. Confluent, carefully weighing setup and time costs, maintenance costs, limitations, security, availability, risks, migration costs, Kafka update frequency, observability, and error and troubleshooting needs.

Domenico's team settled on Confluent Cloud and built the following stack (an illustrative sketch of the Lambda-to-Kafka step follows the episode links below):

AWS AppSync, a managed GraphQL layer to interact with and abstract third-party APIs (data sources)
AWS Lambdas for extracting data and producing to Kafka topics
Kafka topics for the raw as well as transformed data
Kafka Streams for data transformation
The Kafka Redshift sink connector for loading data
AWS Redshift as the destination cloud data warehouse
Looker for business intelligence and big data analytics

This stack allowed the company's data to be consumed by multiple teams in a scalable way. Eventually, DynamoDB was added, and by the end of a year, along with a scalable architecture, Domenico had successfully grown his staff to 45 members across six teams.

EPISODE LINKS
Confluent’s Data Streaming Platform Can Save Over $2.5M vs. Self-Managing Apache Kafka
Accelerate Your Cloud Data Warehouse Migration and Modernization with Confluent
Watch the video version of this podcast
Kris Jenkins' Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
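As an illustration of the "Lambdas for extracting data and producing to Kafka topics" step, here is a hedged sketch of a Python Lambda handler producing to Confluent Cloud. The topic name, credentials, and the shape of the incoming event are assumptions; the real pipeline's schemas and error handling would differ.

```python
# Hypothetical Lambda: forward extracted records to a raw Kafka topic.
# Topic name, env vars, and the "bookings" payload shape are invented.
import json
import os
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": os.environ["BOOTSTRAP_SERVERS"],  # Confluent Cloud endpoint
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": os.environ["KAFKA_API_KEY"],
    "sasl.password": os.environ["KAFKA_API_SECRET"],
})

def handler(event, context):
    """Lambda entry point: produce each extracted booking record to a raw topic."""
    bookings = event.get("bookings", [])               # payload shape is hypothetical
    for record in bookings:
        producer.produce(
            topic="raw-bookings",
            key=str(record.get("booking_id", "")),
            value=json.dumps(record),
        )
    producer.flush()                                    # Lambda may freeze after returning
    return {"produced": len(bookings)}
```

Keeping the producer at module scope lets warm Lambda invocations reuse the same client and connections rather than rebuilding them on every call.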
What is GraphQL? And how can you combine GraphQL with Apache Kafka® to query data in real time?

With over 10 years of experience as a backend engineer, Gerard Klijs is a Confluent Community Catalyst, a contributor to several GraphQL libraries, and the creator and maintainer of a Rust library for using Confluent Schema Registry, comparable to the Java client. In this episode, he explains why you would want to use Kafka with GraphQL and how the two work together to bridge the gap between backend and frontend, making data more easily accessible in the frontend.

As an alternative to REST, GraphQL is an open source query language developed by Meta, which lets you pull data from multiple data sources via a single API call. GraphQL also lets you migrate and deprecate data easily. For example, if you have a `name` field that you later decide to replace with `firstName` and `lastName`, you can deprecate the old field and monitor the server for queries that still request it. Once no more queries use the deprecated field, it can be removed from the server.

Usually, GraphQL is used in the frontend with a server implemented in Node.js, while Kafka is often used as an integration layer between backend components. When it comes to connecting Kafka with GraphQL, the use cases might not seem as vast at first glance, but Gerard thinks that is due to unfamiliarity and misconceptions about how the two can work together. For example, some may think Kafka is merely a message bus and GraphQL is for graph databases.

Gerard also talks about the backend for frontend (BFF) pattern, as well as tips on working with GraphQL.

EPISODE LINKS
Getting Started with GraphQL and Apache Kafka
Kafka and GraphQL: Misconceptions and Connections
Gerard Klijs GitHub
Watch the video version of this podcast
Kris Jenkins Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
Data availability, usability, integrity, and security are terms we hear a lot. But what do they actually look like when put into practice? That's where data governance comes in. This becomes especially tricky when working with real-time data architectures.

Tushar Thole (Senior Manager, Engineering, Trust & Security, Confluent) focuses on delivering features for software-defined storage, software-defined networking (SD-WAN), security, and cloud-native domains. In this episode, he shares the importance of real-time data governance and introduces Stream Governance, the product portfolio his team has been building to foster the collaboration and knowledge sharing necessary to become an event-centric business while remaining compliant within an ever-evolving landscape of data regulations.

With the increase of data volume, variety, and velocity, data governance is mandatory for trustworthy, usable, accurate, and accessible data across organizations, especially with distributed data in motion. When it comes to choosing a tool to govern real-time distributed data, there is often a paradox of choice. Some tools are built for handling data at rest, while open source alternatives lack features and are not managed services that integrate natively with the Apache Kafka® ecosystem.

To solve governance use cases by delivering high-quality data assets, Tushar and his team have been taking Confluent Schema Registry, considered the de facto metadata management standard for the ecosystem, to the next level. This approach to governance allows organizations to scale Kafka operations for real-time observability with security and quality.

The fully managed, cloud-native Stream Governance framework is based on three key workflows:

Stream catalog: Search and discover data in a self-service fashion
Stream lineage: Understand complex data relationships with interactive, end-to-end maps of event streams
Stream quality: Deliver trusted, high-quality event streams to the organization

Tushar also shares use cases around data governance and sheds light on the Stream Governance roadmap.

EPISODE LINKS
Stream Governance – How it Works
Data Mess to Data Mesh | Jay Kreps
Demo: Stream Governance
Data Governance for Real Time Data
Watch the video version of this podcast
Kris Jenkins Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
How many messages can Apache Kafka® process per second? At Honeycomb, it's easily over one million messages. In this episode, you'll get a taste of how Honeycomb uses Kafka at massive scale. Liz Fong-Jones (Principal Developer Advocate, Honeycomb) explains how Honeycomb manages Kafka-based telemetry ingestion pipelines and scales Kafka clusters.

And what is Honeycomb? Honeycomb is an observability platform that helps you visualize, analyze, and improve cloud application quality and performance. Their data volume has grown by a factor of 10 throughout the pandemic, while total cost of ownership has only gone up by 20%. But how, you ask?

As a developer advocate for site reliability engineering (SRE) and observability, Liz works alongside the platform engineering team on optimizing infrastructure for reliability and cost. Two years ago, the team was facing the prospect of growing from 20 Kafka brokers to 200 Kafka brokers as data volume increased. The challenge was to scale and shuffle data across that many brokers while maintaining cost efficiency.

The Honeycomb engineering team experimented with using sc1 or st1 EBS hard disks to store the majority of longer-term archives and keep only the latest hours of data on NVMe instance storage. However, this approach to cost reduction was not ideal, and they ended up needing to keep data older than 24 hours on SSD. The team then explored and adopted Zstandard compression to decrease bandwidth and disk size (a small producer configuration sketch follows the episode links below); however, the clusters were still struggling to keep up.

When Confluent Platform 6.0 rolled out Tiered Storage, the team saw it as a feature that could help them break away from being storage bound. Before bringing the feature into production, the team did a proof of concept, which helped them gain confidence as they watched Kafka tolerate broker death and reduce latencies in fetching historical data. Tiered Storage now shrinks their clusters significantly: less data has to sit on local NVMe SSDs, and tiered data is stored only once in Amazon S3 rather than consuming SSD space on every replica. In combination with AWS Im4gn instances, Tiered Storage allows the team to scale for long-term growth. Honeycomb also saved 87% on the cost per megabyte of Kafka throughput by optimizing their Kafka clusters.

EPISODE LINKS
Tiered Storage
Introducing Confluent Platform 6.0
Scaling Kafka at Honeycomb
Watch the video version of this podcast
Kris Jenkins Twitter
Streaming Audio Playlist
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
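A minimal sketch, not Honeycomb's code, of how a producer opts into Zstandard compression with the Python Kafka client; the broker address, topic, and tuning values are placeholders.

```python
# Enabling Zstandard compression on a producer. Batches are compressed as a
# unit, so allowing a little lingering generally improves the compression
# ratio. Broker address and topic are placeholders.
from confluent_kafka import Producer

producer = Producer({
    "bootstrap.servers": "<BROKER>",
    "compression.type": "zstd",      # Zstandard, as discussed in the episode
    "linger.ms": 50,                 # wait briefly to form larger, better-compressing batches
})

for i in range(10_000):
    producer.produce("telemetry-events", value=f'{{"event_id": {i}}}')
    producer.poll(0)                 # serve delivery callbacks as we go

producer.flush()
```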
With experience in data infrastructure and distributed data technologies, Ben Stopford (Lead Technologist, Office of the CTO, Confluent), author of the book "Designing Event-Driven Systems," explains the data mesh paradigm, the differences between traditional data warehouses and microservices, and how you can get started with data mesh.

Unlike standard data architecture, data mesh is about moving data away from a monolithic data warehouse into distributed data systems. Doing so allows data to be made available as a product, which is also one of the four principles of data mesh:

Data ownership by domain
Data as a product
Data available everywhere for self-service
Data governed wherever it is

These four principles are technology agnostic, which means they don't restrict you to a particular programming language, to Apache Kafka®, or to other databases. Data mesh is all about building point-to-point architecture that lets you evolve and accommodate real-time data needs with governance tools.

Fundamentally, data mesh is more than a technological shift. It's a mindset shift that requires the cultural adaptation of product thinking: treating data as a product instead of as an asset or resource. Data mesh vests ownership of data in the people who create it, with requirements that ensure quality and governance. Because data mesh consists of a map of interconnections, it's important to have governance tools in place to identify data sources and provide data discovery capabilities.

There are many ways to implement data mesh, event streaming being one of them. You can ingest data sets from across organizations and sources into your own data system, then use stream processing to trigger an application response to the data set. By representing each data product as a data stream, you can tag it with sub-elements and secondary dimensions to enable data searchability. If you are using a managed service like Confluent Cloud for data mesh, you can visualize how data flows inside the mesh through a stream lineage graph.

Ben also discusses the importance of keeping data architecture as simple as you can to avoid derivatives of data products.

EPISODE LINKS
Data Mesh 101 course
Data Mesh 101 with Live Walkthrough Exercise
Introduction and Guide to Data Mesh
The Definitive Guide to Building a Data Mesh with Event Streams
What is Data Mesh, and How Does it Work? ft. Zhamak Dehghani
Designing Event-Driven Systems
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
What is serverless?

Having worked as a software engineer for over 15 years and as a regular contributor to Kafka Streams, Bill Bejeck (Integration Architect, Confluent) is an Apache Kafka® committer and the author of "Kafka Streams in Action." In today's episode, he explains what serverless is and the architectural concepts behind it.

To clarify, serverless doesn't mean you can run an application without a server: there are still servers in the architecture, but they are abstracted away from your application development. In other words, you can focus on building and running applications and services without any concern for infrastructure management. Using a cloud provider such as Amazon Web Services (AWS) lets you allocate machine resources on demand while the provider handles provisioning, maintenance, and scaling of the server infrastructure.

There are a few important terms to know when implementing serverless functions with event stream processors:

Functions as a service (FaaS)
Stateless stream processing
Stateful stream processing

Serverless commonly falls into the FaaS cloud computing category; for example, AWS Lambda is the classic FaaS offering. It gives you a good degree of control to run a discrete chunk of code in response to certain events, and it lets you write code to solve a specific issue or use case. Stateless processing is simpler than stateful processing, which is more complex because it involves keeping the state of an event stream and needs a key-value store. ksqlDB allows you to perform both stateless and stateful processing, but its strength lies in stateful processing to answer complex questions, while AWS Lambda is better suited for stateless processing tasks (a sketch of a stateless Lambda handler follows the episode links below). By integrating ksqlDB and AWS Lambda, you can deliver serverless event streaming and analytics at scale.

EPISODE LINKS
What is Serverless?
Serverless Stream Processing with Apache Kafka, AWS Lambda, and ksqlDB
Stateful Serverless Architectures with ksqlDB and AWS Lambda
Serverless GitHub repository
Kafka Streams in Action
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
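As a rough illustration of the stateless FaaS side of this pattern, here is a sketch of a Python AWS Lambda handler that processes a batch of Kafka records delivered by an event source mapping. The payload shape follows the documented Kafka event format only roughly, and the decoded record fields and threshold are invented.

```python
# Stateless processing in a FaaS function: each Kafka record is judged on its
# own, with no state carried between invocations. Field names inside the
# decoded payload ("error_rate", "service") are hypothetical.
import base64
import json

def handler(event, context):
    alerts = 0
    # Records arrive grouped under "topic-partition" keys, with base64-encoded values.
    for topic_partition, records in event.get("records", {}).items():
        for record in records:
            payload = json.loads(base64.b64decode(record["value"]))
            if payload.get("error_rate", 0.0) > 0.05:      # threshold is illustrative
                alerts += 1
                print(f"high error rate from {payload.get('service')} "
                      f"at offset {record['offset']} in {topic_partition}")
    return {"records_seen": sum(len(r) for r in event.get("records", {}).values()),
            "alerts": alerts}
```

Anything that needs history, such as a rolling aggregate per service, is where the episode points you toward stateful tools like ksqlDB instead.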
When it comes to Apache Kafka®, there's no one better to tell the story than Jay Kreps (Co-Founder and CEO, Confluent), one of the original creators of Kafka. In this episode, he talks about the evolution of Kafka from in-house infrastructure to a managed cloud service and discusses what's next for the infrastructure engineers who used to self-manage the workload.

Kafka started out at LinkedIn as a distributed stream processing framework and was core to their central data pipeline. At the time, the challenge was to address scalability for real-time data feeds. The social media platform's initial data system was built on Apache™ Hadoop®, but the team later realized that operationalizing and scaling the system required a considerable amount of work. When they started re-engineering the infrastructure, Jay observed a big gap in data streaming: on one end, data was being analyzed constantly, while on the other end, data was being looked at only once a day, with no real-time data interconnection in between. This ushered in efforts to build a distributed system that connects applications, data systems, and organizations for real-time data. That goal led to the birth of Kafka and eventually to a company around it: Confluent.

Over time, Confluent progressed from focusing solely on Kafka as a software product to a more holistic view: Kafka as a complete central nervous system for data, integrating connectors and stream processing with a fully managed cloud service.

Now, as organizations make a similar shift from in-house infrastructure to fully managed services, Jay outlines five guiding points to keep in mind:

Cloud-native systems abstract away operational efforts for you without infrastructure concerns
It's important to have a complete ecosystem for Kafka, including connectors, a SQL layer, and data governance
A distributed system should allow data to be accessible everywhere and across organizations
Identifying a reliable, dependable storage infrastructure layer, such as Amazon S3, is critical
Cost-effective models mean sustainability and systems that are easy to build around

EPISODE LINKS
Building Real-Time Data Systems the Hard Way
Kris Jenkins Twitter
The Hitchhiker’s Guide to the Galaxy
Hedonic treadmill
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
Meet your new host of the Streaming Audio podcast: Kris Jenkins (Senior Developer Advocate, Confluent)! In this preview, Kris shares a few highlights from forthcoming episodes to look forward to, spanning topics from data mesh, cloud-native technologies, and serverless Apache Kafka® to data modeling.

As a developer advocate, Kris is endlessly fascinated by software design, functional programming, real-time systems, and electronic music. He is a veteran software developer and engineer, with a broad background that includes roles such as CTO of a Java/Oracle gold exchange and contract developer of several Haskell/PureScript-based event systems.

There is still a raft of data streaming narratives to tell and many community experts to feature. We'll cover what's new and emerging, real-life Kafka use cases, and how people are currently using managed Kafka as a service, as well as the latest in the data streaming space.

If there's a subject you'd like to see covered on the show, or if you know someone who should be featured, let us know via the Confluent Community engagement form.

EPISODE LINKS
Get involved in the Confluent Community
Subscribe on Apple Podcasts
Subscribe on Spotify
Subscribe on Android
Listen and Subscribe on PodLink
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
After nearly 200 episodes of Streaming Audio, Tim Berglund bids farewell in his last episode as host of the show. Tim reflects on the many great memories with guests who have appeared on the show, each memorable for its own reasons. He has covered a wide variety of topics, ranging from Apache Kafka® fundamentals, microservices, event stream processing, and use cases to cloud-native Kafka, data mesh, and more.

As Tim mentions, the Streaming Audio podcast will continue to explore all things Kafka and cloud while featuring new voices and topics. You can subscribe to Streaming Audio on your podcast platform of choice to get the latest updates and news.

Thank you for listening, and stay tuned.

EPISODE LINKS
I Interviewed Nearly 200 Apache Kafka Experts and I Learned These 10 Things
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
What is event sourcing and how does it work?

Event sourcing is often used interchangeably with event-driven architecture and event stream processing. However, Anna McDonald (Principal Customer Success Technical Architect, Confluent) explains that it's a specific category of its own: an event streaming pattern.

Anna is passionate about event-driven architectures and event patterns. She's a tour de force in the Apache Kafka® community and the presenter of the Event Sourcing and Event Storage with Apache Kafka course on Confluent Developer. In this episode, she previews the course by providing an overview of what event sourcing is and what you need to know in order to build event-driven systems.

Event sourcing is an architectural design pattern that defines an approach to handling data operations driven by a sequence of events. The pattern ensures that all changes to an application's state are captured and stored as an immutable sequence of events, known as a log of events. The events are persisted in an event store, which acts as the system of record. Unlike traditional databases, where only the latest status is saved, an event-based system saves all events into a database in sequential order. If you find that a past event is incorrect, you can replay each event from a certain timestamp up to the present to recreate the latest state of the data. (A minimal code sketch of this pattern follows the episode links below.) Event sourcing is commonly implemented with a command query responsibility segregation (CQRS) system to perform data computation tasks in response to events. To implement CQRS with Kafka, you can use Kafka Connect along with a database, or alternatively use Kafka with the streaming database ksqlDB.

Anna also discusses:

Data at rest and data in motion techniques for event modeling
The differences between event streaming and event sourcing
How CQRS, change data capture (CDC), and event streaming help you leverage event-driven systems
The primary qualities and advantages of an event-based storage system
Use cases for event sourcing and how it integrates with your systems

EPISODE LINKS
Event Sourcing course
Event Streaming in 3 Minutes
Introducing Derivative Event Sourcing
Meetup: Event Sourcing and Apache Kafka
Watch the video version of this podcast
Join the Confluent Community
Learn more with Kafka tutorials, resources, and guides at Confluent Developer
Live demo: Intro to Event-Driven Microservices with Confluent
Use PODCAST100 to get an additional $100 of free Confluent Cloud usage (details)
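A minimal sketch, not from Anna's course, of the core event sourcing idea: an append-only log of events as the system of record, with current state derived by replaying it. Event names and fields are invented, and in practice the log would live in Kafka topics rather than an in-memory list.

```python
# Event sourcing in miniature: state changes are recorded as immutable events,
# and any state (current or historical) is rebuilt by folding over the log.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    sequence: int
    kind: str          # e.g. "Deposited", "Withdrawn"
    amount: int

event_log: list[Event] = []          # the append-only system of record

def append(kind: str, amount: int) -> None:
    event_log.append(Event(sequence=len(event_log), kind=kind, amount=amount))

def replay(upto: int | None = None) -> int:
    """Rebuild the account balance by replaying the log, optionally up to a point."""
    balance = 0
    for event in event_log if upto is None else event_log[:upto]:
        if event.kind == "Deposited":
            balance += event.amount
        elif event.kind == "Withdrawn":
            balance -= event.amount
    return balance

append("Deposited", 100)
append("Withdrawn", 30)
append("Deposited", 50)

print(replay())          # current state: 120
print(replay(upto=2))    # state as of the second event: 70
```

A CQRS read model would simply be another fold over the same log, maintained separately from the write path.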