The AI Optimist

Moving beyond AI hype, The AI Optimist explores how we can use AI to our advantage, how not to be left behind, and what's essential for business and education going forward. Each week for one year I'm weighing the possibilities of AI against the drawbacks. Diving into regulations and the top 10 questions posed by AI Pessimists, I'm not here to prove I'm right. The purpose is to engage in discussions with both sides, hear out what we fear and what we hope for, and help design AI models that benefit us all.

www.theaioptimist.com

Copyright's Last Stand: Building AI's New Rules

In the midst of the AI glory days, in a room buzzing with the impact and momentum of the industry, no one seemed to care about copyrights except the speaker on the Disrupt stage. While watching Martin Casado of A16Z deftly handle questions about AI regulation, he stopped me short by noting the importance of copyright to regulation. The room was packed with the usual entrepreneurs, investors, and journalists eager to hear how Silicon Valley will lead the AI revolution. But something wasn't sitting right with me.

"Why do tech companies treat AI training like a copyright-free zone?" I finally asked, cutting through the careful corporate speak.

The room went quiet. Casado's response was telling - he immediately reached for the comfortable analogy of Napster and how peer-to-peer sharing eventually led to Spotify and Apple Music. As someone who lived through that transformation, I knew the punchline he was missing: musicians now earn a fraction of what they once did. We're about to repeat history, but this time it's not just music - it's every form of human creativity.

Copyright Is the Castle, AI Has Wings

Copyright law is like a rundown medieval castle. For centuries, it protected creative works with its stone walls, drawbridges, and vigilant guards. Then AI showed up - not with battering rams, but with wings. Our carefully constructed walls became meaningless overnight.

The old rules weren't built for machines that can process centuries of human creativity before breakfast. We're using rules written for printing presses to regulate digital shapeshifters.

What if one day a talented artist receives a cease-and-desist letter for AI-generated art that infringes on her own style? She could be legally challenged over AI mimicking her creativity, while the same AI had likely trained on her work without permission or compensation.

The Breaking Point Is Here

The collision between old laws and new technology isn't theoretical - it's happening right now. At TechCrunch Disrupt, I watched this play out in real time. When tech leaders discuss AI training, they treat copyrighted work like it's a free-for-all resource. But for creators watching their life's work being ingested into AI models without compensation or consent, it's not so simple.

This is where we hit two colliding truths:

* Engineers say: "Input isn't copyrightable."
* Creators say: "That's my life's work."

Both are right. Both are wrong. And that's exactly why we need a new approach.

Beyond Protection: The Participation Economy

Copyright law was built for a world of physical delivery - books, records, paintings you could hold in your hands. It was about protecting specific copies from unauthorized duplication. But in today's digital age, where AI can process and transform millions of works instantly, that framework isn't just outdated - it's obsolete.

The solution isn't more lawsuits or stronger copyright walls. It's about building a new system that benefits both creators and AI development:

* LLMs need quality training data to build better language models.
* Creators need fair compensation and recognition for their work.

Instead of seeing these as competing interests, we can align them. I've developed two potential frameworks to help creators and AI interact - not through courts and litigation, but through code and collaboration. These solutions focus on measuring and rewarding actual influence: how much a creator's work contributes to AI outputs and model improvement.

The Token Economy: Rethinking Creative Value

Thinking about the comparison between today's AI and the Napster era crystallized something for me. We're not just facing a copyright problem - we're facing a value recognition problem. But unlike the music industry's painful transition to streaming, we have a chance to build something better from the start.

The Token Economy model I'm proposing treats creative works like living assets rather than static files. Here's how it works:

Influence Tracking in Action

Imagine each piece of creative content broken down into tokens - not just words or images, but meaningful building blocks that carry their influence forward. Every time an AI model learns from or uses these tokens, it creates a trackable imprint. This isn't theoretical - we're already seeing similar principles at work in how transformer models process and weight different inputs.

Smart Compensation That Scales

Let's talk real numbers:

* Training an AI model costs millions to hundreds of millions of dollars.
* Current licensing deals max out around $20-100 million total for larger businesses.
* Most creators earn pennies per view or use of their content.

Instead of flat licensing fees or per-use payments, make compensation dynamic:

* High-impact content (like authoritative research papers) earns premium compensation.
* Unique creative works (like distinctive writing styles or innovative code) earn mid-tier compensation.
* Generic content (like basic product descriptions) receives minimal compensation.
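To make the dynamic-tier idea concrete, here is a minimal sketch of how influence-weighted payouts could work. The tier names, thresholds, and per-use rates are my own illustrative assumptions, not figures from any real licensing deal.

```python
# Hypothetical sketch: dynamic, influence-weighted compensation tiers.
# Thresholds and payout rates are illustrative assumptions, not real figures.

def classify_tier(influence_score: float) -> str:
    """Map a measured influence score (0-1) to a compensation tier."""
    if influence_score >= 0.8:
        return "premium"   # e.g., authoritative research papers
    if influence_score >= 0.4:
        return "mid"       # e.g., distinctive styles, innovative code
    return "minimal"       # e.g., generic product descriptions

RATE_PER_USE = {"premium": 0.010, "mid": 0.002, "minimal": 0.0001}  # dollars, assumed

def payout(influence_score: float, model_uses: int) -> float:
    """Compensation scales with both influence and how often the model draws on the work."""
    return RATE_PER_USE[classify_tier(influence_score)] * model_uses

print(payout(0.9, 1_000_000))  # high-impact work used heavily -> 10000.0
print(payout(0.1, 1_000_000))  # generic content, same usage    -> 100.0
```

The point of the sketch isn't the specific rates - it's that once influence is measurable, compensation can follow it automatically instead of being negotiated once in a flat license.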
The Efficiency Breakthrough

This model doesn't just serve creators - it makes AI development more efficient. Here's why:

* Better training data leads to better models.
* Transparent compensation reduces legal risks and costs.
* Standardized systems lower content acquisition expenses.

What if compensation was based on actual influence and value created? The metric would serve the business, and a select group of creators would be rewarded financially, others with credits and bonuses.

Improving AI Training with Fair Participation

When we talk about LLMs and training data, quality matters more than quantity. Right now, AI companies are scraping everything they can find, treating all content as equal. But we know that's not true. Some content - whether through its clarity, originality, or influence - contributes more to model performance than others.

Performance-Based Value Creation

Here's what happens when we shift from protection to participation. Value tiers that make sense:

* Research papers that improve model accuracy
* Creative works that enhance output quality
* Technical content that improves specialized knowledge
* Cultural works that help models understand context

Each tier gets compensated based on measurable improvements in model performance. It's not just about paying for content - it's about investing in quality.

The Technical Framework

I've looked at a proof-of-concept system that tracks how AI models pay attention when they create. Think of it like a heart rate monitor for creativity. Every time the AI focuses on certain parts of its training, we can trace those patterns.
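A toy sketch can show the shape of that idea: accumulate a model's attention weights over source material into a per-creator ledger. The sources, weights, and ledger below are all invented for illustration - real attribution inside a trained model is far harder than this.

```python
import numpy as np

# Toy sketch of attention-based attribution: given each generation step's
# (normalized) attention over training-derived source chunks, accumulate
# an "influence ledger" per creator. Purely illustrative.

rng = np.random.default_rng(0)
sources = ["creator_a/essay", "creator_b/photos", "creator_c/code"]

def attention_step() -> np.ndarray:
    """One decoding step's attention weights over the source chunks."""
    w = rng.random(len(sources))
    return w / w.sum()          # normalize so the weights sum to 1

ledger = {s: 0.0 for s in sources}
for _ in range(1000):           # simulate 1,000 generation steps
    for src, weight in zip(sources, attention_step()):
        ledger[src] += weight

total = sum(ledger.values())
for src, score in ledger.items():
    print(f"{src}: {score / total:.1%} of measured influence")
```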
This isn't perfect attribution, but it's measurable impact. The code does something fascinating:

* Watches attention patterns across neural networks
* Identifies distinctive signatures in content influence
* Creates trackable metrics for compensation

Download the Code
Download the Claude discussion PDF

Real-World Applications

We're already seeing similar ideas emerge:

* Stability AI exploring creator compensation models
* Adobe's Content Credentials system tracking image origins
* Credtent empowering creators to control how LLMs use their work

Digital Influence Rights: Beyond Traditional Copyright

The challenge with traditional copyright isn't just technological - it's conceptual. We're trying to apply rules meant for copying books to systems that transform and recombine information in entirely new ways. Instead of fighting this transformation, we need to embrace and shape it.

What Are Digital Influence Rights?

Digital influence rights represent a fundamental shift in how we think about creative value in the AI era. Instead of focusing on preventing copying, these rights center on:

* Measuring how content shapes AI outputs
* Tracking the impact of creative work on model performance
* Creating value through influence rather than restriction

Core Mechanics

* Value-Based Tiers
  * High-impact content earns premium rights
  * Cultural significance affects earnings
  * Dynamic pricing based on actual use and influence in improving AI outputs
* Creator Benefits Program
  * AI system credits that grow with contribution
  * Premium access rights to AI tools
  * Direct influence over future AI training decisions
* Democratic Access
  * Small creators can participate without legal overhead
  * Impact-based earnings tracked through code
  * Technical solutions replace legal battles

Technical Implementation

The system works by:

* Tracking attention patterns in neural networks
* Measuring how different inputs affect model performance
* Creating transparent metrics for value distribution

While the code for these systems is still emerging, we're seeing similar concepts being developed across the industry. For those interested in learning more about these developing technologies, I recommend following:

* Technical discussions about AI attribution systems
* Open source AI model development communities
* Creator rights organizations working on AI attribution
* Companies developing content credentialing systems

Tracing AI's Creative DNA: Starting the Conversation

When I talk about tracking creative influence in AI, some say it's impossible - like trying to find a specific drop of water after it's been poured into the ocean. They're not wrong about the complexity, but they're missing an important point. We don't need perfect attribution to start building better systems.

Netflix doesn't know exactly why you watched that show, but they can make educated guesses about what influences your choices. Spotify doesn't perfectly understand music but can track patterns of influence and similarity well enough to build a business model.

I'm not an engineer, but I've been exploring these concepts with AI tools to understand the possibilities. The proof-of-concept ideas I'm working on are simple starting points - ways to begin thinking about how we might track and value creative influence in AI systems. It's like watching the attention patterns of AI models - where do they focus? What influences their outputs?

This isn't about building a perfect system. It's about starting the conversation.

11-22
24:09

The Trust Gap in Advertising: Why AI Models Beat Middle Managers

From Art to AI: How Generative Models Are Reinventing Advertising

The advertising industry is about to experience its AWS moment. Just as cloud computing transformed IT from a capital-intensive hardware business into an API-driven software model, AI is pivoting advertising from a labor-intensive creative process into a model-driven generation engine.

At the center of this generative experience experiment stands Hikari Senju, a Harvard-educated technologist who saw what others missed: the future of advertising isn't about better targeting or fancier platforms - it's about teaching machines to understand and evolve brand DNA.

For the past 50 years, advertising has operated like a game of telephone played across conference rooms. Creative teams pitch concepts to brand managers, who relay requirements to designers, who pass assets to media buyers, who then try to translate everything into campaigns. Each handoff introduces friction, loses context, and dilutes the original creative vision. The result? Billions in wasted ad spend and brands that feel more like committees than personalities. It's a system that somehow survived into the digital age, but like all paper-driven processes, it's about to be washed away by generative AI.

What Hikari and his team at Omneky have built is effectively a "generative brand management" platform - think GitHub for brand evolution. Instead of managing static assets and style guides, they're building and refining brand models that can learn, adapt, and generate in real time. These models don't just create ads; they absorb performance data, understand consumer engagement, and continuously evolve the brand's voice across every platform. It's the difference between having a rulebook and having a living, breathing brand personality that can hold millions of simultaneous conversations while staying true to its core identity. This isn't just a better way to make ads - it's the foundation for how all brands will express themselves in the AI age.

The Artist and the Algorithm

Picture a young computer science student at Harvard, drawn to machine learning and art, who spots something others missed: the future of advertising isn't in better targeting or fancier platforms - it's in teaching AI to be creative. This is the story of Hikari Senju and how his unique vision is transforming advertising from a human guessing game into a data-driven science.

"I grew up in Westchester, New York. My grandfather was an executive at IBM, my dad's an artist. From a very early age, I had this environment full of art and creativity as well as technology, computer robotics competitions as a kid." - Hikari Senju

While studying at Harvard from 2011 to 2014, Senju encountered early generative models that would change his perspective. "It was there when I saw some of the early generative models, some of the precursors to generative adversarial networks. I got very excited about the space. Particularly because of the idea of an AI or technology being artistic and being creative, generating art. I found even the possibility of that incredibly exciting."

The seed was planted: what if AI could bridge the gap between creative expression and data-driven results?

The Birth of a Data-Driven Content Generation Vision (2018)

After an early success with an ad tech company in college, Senju noticed advertising's biggest problem wasn't distribution - it was developing the creative. "More than half of marketing budgets are wasted every year on ineffective content.
The quality of the design, the quality of the creative is the biggest impact in terms of driving sales. And yet, people are throwing darts in the dark."

This realization led to the founding of Omneky in 2018, with a laser focus on creative content generation. As Senju explains: "First off, it was the biggest lever, the step change this technology would enable. And I think as a startup, you want to pick your bets carefully... If it's just an incremental productivity boost, then it kind of favors incumbents who have those distribution advantages."

The Science Behind the Magic

What makes Omneky's approach unique isn't just generating content - it's building living, breathing brand models that learn and adapt. The system works through a sophisticated process that Senju breaks down into several key components:

### 1. Brand DNA Extraction

"We train on the corpus of the customers' historical content that they've published, their social media, their brand books. We have a whole brand management page for customers to upload all the brand assets and give it brand rules and brand guidelines."

### 2. Rule-Based Learning

The system learns not just what to do, but what not to do: "What words can you not say? What words can you say when you're generating imagery? What kinds of imagery are you seeking? What are the rules when it comes to displaying your logo that you cannot violate?"
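Omneky hasn't published its rule engine, but a minimal sketch suggests what machine-checkable brand guardrails of this kind could look like. The schema, rule names, and sample copy below are invented for illustration, not taken from Omneky's platform.

```python
# Illustrative sketch of brand guardrails, inspired by the rules Senju
# describes ("what words can you not say?", logo constraints). The schema
# and the brand values here are invented for the example.

BRAND_RULES = {
    "banned_words": {"cheap", "guaranteed"},
    "required_tone": "optimistic",
    "logo": {"min_clear_space_px": 24, "allowed_backgrounds": {"white", "black"}},
}

def validate_copy(ad_copy: str, rules: dict = BRAND_RULES) -> list[str]:
    """Return rule violations for a generated piece of ad copy.
    (Only the banned-word rule is enforced in this toy; tone and logo
    checks would need their own validators.)"""
    words = {w.strip(".,!?").lower() for w in ad_copy.split()}
    return [f"banned word used: {w!r}" for w in rules["banned_words"] & words]

draft = "Cheap growth, guaranteed results!"
print(validate_copy(draft))  # -> violations for 'cheap' and 'guaranteed' (order may vary)
```

The design point: every generated asset passes through the rules before it ships, so the brand model can stay dynamic while the constraints stay fixed.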
### 3. Dynamic Optimization

"Our edge is really not just about generating the creative content, but linking it to the data from the distribution platform. What types of content are having higher click-through rates and return on ad spend based on the various ad networks and ad platforms, and how do we incorporate that data into the generation process?"

### 4. Creative Brief Innovation

The platform changes how creative briefs are developed: "We have a creative brief tool that generates ideas for consumers to test. Customers will input the objectives of their campaigns, the audiences that they're targeting. And then we will suggest the platforms to advertise. We will suggest the copy, imagery. We'll even generate sample mood board imagery and sample product photographs."

Beyond Digital Asset Management

The traditional world of static brand guidelines is giving way to what Senju calls "generative experience management": "In the old world that we used to live in, it was the world of digital asset management - static brands, static assets. But we're going to live in a world in the future where all the assets are going to be generated on the fly in real time, based on personal data, based on what's trending, based on all the new learnings."

This shift changes how brands maintain consistency while staying dynamic: "Brands want to make sure that as they're scaling that brand experience, that they're also regaining control of that, and that brand experience is consistent across different advertising platforms, across different social platforms, across different display platforms or streaming platforms."

The AI Agent Revolution

The latest step is Omneky's move toward autonomous AI agents that can manage entire advertising campaigns: "We're working on AI agents for automating media buying and helping businesses be even more agile when it comes to launching and optimizing creative content. The AI agents can be more agile and run 24/7, to take into account real-time signals of the market."

Breaking Down Organizational Silos

Senju identifies a critical problem in traditional advertising structures: "Think about the inefficiency in organizations today when it comes to creative, especially if you're a larger company. You tend to have a marketing department and a creative department. Those two departments don't necessarily speak the same language. The marketing team tends to be more involved in data. The creative team tends to be more involved in stories of the big idea."

The solution? AI agents that bridge this gap: "In the future, AI systems will be way more efficient. An AI system will in real time incorporate data from marketing into the generation process. And you won't have to have cross-team communication and layers of people communicating with each other to execute an idea. It'll be in real time."

Democratizing Advertising

One of the most profound impacts of this technology is how it levels the playing field for startups: "You can run an effective advertising campaign with something as small as a $100 budget, generate some effective creative and run that as an ad and get potentially dozens of customers. The entry ticket for running an advertising campaign historically was much, much higher than that."

This is particularly powerful for new businesses: "This hyper targeting benefits startups. It's not necessarily that valuable for a big brand that already has incredible distribution. Everyone knows their brand. It's really that startup that's just starting but needs to find its first 100 customers, first thousand customers, that hyper targeting is incredibly valuable for."

The Future Is Platform-Agnostic

While major platforms like Meta are developing their own AI advertising tools, Senju sees a different future: "Advertisers do not want to delegate the brand to Meta. They want to manage their brand. They want to control their brand... Consumers are engaging with brands not just on Meta, but on YouTube and TikTok and Pinterest and Snapchat and all the various streaming platforms."

The Power of Independence

This platform-agnostic approach offers unique advantages: "What advertisers will seek is for an AI agent to orchestrate their brand across those platforms... An agent can see the performance of every single advertising platform, the return on ad spend, incremental revenue gain across all different platforms in real time across budgets."

Looking Ahead: The Model-Driven Future

The transformation from traditional advertising to AI-driven models isn't just about automation - it's about building better, more responsive brands that can engage with consumers in real time. As Senju puts it: "An agent-driven world is the world we are entering, because as a business, the value props are so much more straightforward. You have such benefits in terms of cost and efficacy that I think every business will end up embracing agents in the areas that make sense."

This technology enables brands to be more human, not less: "LLMs are going to breathe life into brands. Every brand is going to become a character that has real-time engagement with consumers and develops intimate relationships with consumers."

In the AI Age, growth

11-15
25:47

The Hidden AI Agent Rule Book: Smart Contracts Save AI?

In a dimly lit conference room in San Francisco, tech executives are huddled around a whiteboard, sketching out complex diagrams of AI safety protocols. Across town, researchers are debugging thousands of lines of code meant to teach AI systems ethics. And in Washington, politicians debate new regulations for artificial intelligence.

They're all missing something hiding in plain sight.

At Valory, they create co-owned AI, a shift from traditional AI ownership models, which are typically centralized and controlled by a single entity (usually a large tech company like OpenAI). David uses an obvious tool to regulate all this activity - we'll share it later, since many in AI don't even consider how simple regulating agents might be.

Co-ownership in AI involves multiple parties jointly holding rights to the intellectual property, decision-making authority, and potential profits generated by AI systems. Rules matter, and the hidden rulebook is waking up.

Follow Valory's work on X/Twitter @autonolas for the Olas network and @ValoryAG for Valory, or visit their website at www.valory.xyz.

"I don't think it's helpful to think of the sort of singular AI system which will rule us all," says David Minarsch, CEO of Valory, leaning forward in his chair. "I think we should design for a world where there's competition amongst AI, like there's competition amongst humans."

David Minarsch is the CEO of Valory and a pioneering force in the development of multi-agent systems in the distributed ledger technology (DLT) space. With a PhD in applied game theory from the University of Cambridge, David has extensive expertise in both theoretical and practical aspects of AI and blockchain technology. He has founding experience through Entrepreneur First and has been instrumental in advancing the intersection of crypto and AI. David's mission is to empower communities, organizations, and countries to co-own AI, fostering the creation of agent-based economies across major blockchains. Through innovative projects like Olas Predict, David and his team are shaping the future of AI and blockchain integration.

The AI Agents Control Problem

Every day brings new headlines about AI systems going rogue - chatbots turning malicious, trading algorithms making catastrophic decisions, social media bots spreading misinformation. The standard response is to try harder at programming ethics and safety directly into AI systems.

But there's a fundamental problem with this approach. As Minarsch explains, "If I call into some centralized labs, I don't even actually know what's happening behind this API. In the context of our stack, it's all open source. And the on-chain transactions are also trackable because it's a public blockchain."

This lack of transparency isn't just an academic concern. Take social media, where the line between human and AI-generated content has become increasingly blurred. LinkedIn "pods" automatically generate engagement, while Twitter bots create a synthetic ecosystem that leaves users wondering if they're interacting with real people at all.

Valory, the Olas Protocol, and Smart Contracts May Bring Better AI Agents

Five years ago, Minarsch and his team at Valory began exploring a different approach. "What we found is that you can actually benefit when you build these multi-agent systems when you give these agents wallets," he recalls. This seemingly simple insight - giving AI agents their own crypto wallets - led to a much bigger revelation.

The team's first breakthrough came with prediction markets. Their system, Olas Predict, has now processed over a million transactions, with AI agents participating in markets for everything from election outcomes to economic indicators. But the real innovation wasn't in the predictions themselves - it was in how they controlled the agents' behavior.

The Technical Implementation of Rules

Rather than trying to program every possible ethical scenario into their AI agents, Valory created what Minarsch calls "rails" using smart contracts. These contracts define clear boundaries for agent behavior while allowing flexibility within those boundaries.

"Rather than saying to the agent, 'Oh, here you have a bunch of funds and now go do it with it,' you can say, 'Here's a bunch of funds, but you're constrained to only do X, and it's constrained by cryptography so the agent can't work around it,'" Minarsch explains.

The technical implementation includes:

1. Wallet Integration: Each agent gets its own crypto wallet, allowing it to participate in transactions.
2. State Machine Design: "On the outer control loop, we have this sort of finite state machine design... we effectively give the agent rails on which it can travel."
3. Smart Contract Rules: Contracts define permitted actions and consequences.
4. Identity Verification: "What would be ideal is if I, as a user, can create a sort of set of credentials which define my online identity... and then that identity is cryptographically tied to my posts."
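Valory's actual stack is open source, but the sketch below is not their code - it's a toy Python rendering of the "rails" pattern: a finite state machine plus hard spending limits that the agent's reasoning cannot override. The states, actions, and limits are invented for illustration.

```python
# Toy sketch of the "rails" idea: an agent can only move along transitions
# the designer allowed, and can only spend through a capped action.
# In Valory's world these constraints live in smart contracts; here they
# are plain Python so the pattern is visible.

ALLOWED_TRANSITIONS = {
    "idle":         {"fetch_market"},
    "fetch_market": {"estimate", "idle"},
    "estimate":     {"place_bet", "idle"},
    "place_bet":    {"idle"},
}

class RailedAgent:
    def __init__(self, budget: float, max_bet: float):
        self.state = "idle"
        self.budget = budget
        self.max_bet = max_bet      # hard cap, enforced outside the "AI" part

    def transition(self, next_state: str) -> None:
        if next_state not in ALLOWED_TRANSITIONS[self.state]:
            raise PermissionError(f"{self.state} -> {next_state} is off the rails")
        self.state = next_state

    def place_bet(self, amount: float) -> None:
        self.transition("place_bet")
        if amount > self.max_bet or amount > self.budget:
            raise PermissionError("bet exceeds the enforced limits")
        self.budget -= amount

agent = RailedAgent(budget=100.0, max_bet=5.0)
agent.transition("fetch_market")
agent.transition("estimate")
agent.place_bet(4.0)       # fine: on the rails and under the cap
# agent.place_bet(50.0)    # would raise: constrained by design
```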
Co-owned AI: Real-World Applications

The applications already in production are impressive:

Prediction Markets

"These agents have done over a million transactions," Minarsch notes. "And hundreds of community members in our ecosystem run these agents... day in, day out, participating in prediction markets with your capital."

Trading Systems

Their trading agent allows users to give high-level instructions while the agent handles complex crypto trading operations autonomously - but always within pre-defined smart contract boundaries.

Governance

"You have this problem where people need to vote in these different protocols when they hold these tokens," Minarsch explains. Their "Governator" agent handles voting rights while maintaining accountability through smart contracts.

Social Media Authentication

Perhaps most intriguingly, smart contracts could solve the bot crisis in social media by creating verifiable digital identities. "What we really need is a link between any single post and who actually created it," Minarsch says.

The Revelation: Ancient Wisdom for Modern Problems

And here's the twist: the solution to controlling AI might not be in writing better code or creating more sophisticated neural networks. Instead, it might be in using one of humanity's oldest tools for creating trust and enforcing behavior: contracts. Just as human society uses contracts to define boundaries, establish trust, and create predictable outcomes, smart contracts could provide the same framework for AI systems. They're transparent, immutable, and cryptographically enforced - exactly what we need for trustworthy AI.

Looking Forward: The Path Not Taken

While the tech world chases increasingly complex solutions to AI control, the answer might have been sitting in plain sight all along. Smart contracts represent a bridge between human and artificial intelligence, allowing us to create systems that are both powerful and predictable. The irony is striking: in our rush to create new solutions for AI control, we overlooked one of humanity's most successful inventions for governing behavior. Smart contracts don't just offer a technical solution - they offer a philosophical framework for thinking about AI governance. Instead of trying to program ethics into black boxes, we can create transparent systems of rules and incentives that both humans and AI can understand and trust.

As Minarsch puts it in his closing thoughts: "Our mission is very long term. It's about giving everyone an autonomous agent that they can fully own and control that does arbitrarily useful things for them as they define." In the end, the key to safe AI might not be in teaching machines to be more like humans, but in using human institutions to make machines more trustworthy.

The Multi-Agent Economy: A New Digital Society

What happens when AI agents become economic actors in their own right? According to Minarsch, we're already seeing the emergence of what he calls "agent economies" - systems where AI agents interact, trade, and create value semi-autonomously. "These multi-agent systems can be often understood like a mini economy," Minarsch explains. "A business is one unit and then multiple of them becomes a small economy. And so they become like their own users with their own desires and requirements."

This isn't just theoretical. Valory's prediction market agents have already conducted over a million transactions, demonstrating how AI systems can participate in complex economic activities while remaining within defined boundaries. "Rather than you going and looking at some sort of event and saying, okay, I think this prediction market might not just reflect the reality, the agents do it," Minarsch notes.

Redefining Digital Identity and Trust

Perhaps the most profound possibility of smart contract-controlled AI agents lies in how they could reshape our digital society. The current crisis of trust online - from deepfakes to bot armies - stems from a fundamental problem: we can't verify who (or what) we're interacting with. "What we really need is a link between any single post and who actually created it," Minarsch emphasizes. "The problem is that right now, that's just a trust assumption that we have to place on X, that it is with this kind of entity which is behind it."

Smart contracts offer a solution through cryptographic verification. "What would be ideal is if I, as a user, can create a sort of set of credentials which define my online identity... and then that identity is cryptographically tied to my posts."
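Here's a toy sketch of that idea, assuming nothing about Valory's actual protocol: sign each post with a self-owned key so anyone can verify authorship. It uses the third-party `cryptography` package (`pip install cryptography`).

```python
# Toy sketch of cryptographically tying a post to an identity, in the
# spirit of Minarsch's proposal. This is not Valory's protocol.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

identity_key = Ed25519PrivateKey.generate()   # your self-owned credential
public_id = identity_key.public_key()         # what platforms would verify against

post = b"I wrote this, not a bot."
signature = identity_key.sign(post)           # published alongside the post

try:
    public_id.verify(signature, post)         # anyone can check authorship
    print("verified: post is tied to this identity")
except InvalidSignature:
    print("forged or tampered post")
```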
This could revolutionize social media in several ways:

1. Verifiable Content Origin
- Every post could be cryptographically linked to its creator
- Users could choose to reveal or conceal their identity
- Bot accounts would be clearly labeled as such

2. User-Controlled Algorithms
"If I could actually, as a user, select my own algorithm, that would be desirable," Minarsch explains. "We should not have to consume one version of the news, we should be able to compose that in the way we want and then have means to educ

11-08
30:34

Busting open the AI Adoption Gridlock: 'We Only Win When You Win' Company Rewrites the Rules

AI adoption, especially at the enterprise level, is stuck in neutral. Tons of promise, plenty of challenges - how can we align AI incentives with our customers when we don't know what the next two years will bring?

We're thinking about AI adoption all wrong, overpromising and underperforming against the hype. Yes, it's software, but it's not software from 1989, and that's still quietly how it's being treated. We're wrapping AI, even if just in name, around any business to attract a second look. While enterprises scramble to implement AI and vendors plaster "AI-powered" on everything they sell, they're missing something fundamental:

The barrier to AI adoption isn't the technology - it's how we sell and implement it.

Think about this simple example I shared with my guest, Keebo CEO Barzan Mozafari. When companies want to optimize their cloud infrastructure, which path do they take?

Do they:
A) Hire expensive consultants, spend months in implementation, and hope it works?
OR
B) Take 30 minutes to set up, see results in 24 hours, and only pay for proven savings?

This isn't just about efficiency. It's about a fundamental shift in how we approach AI adoption, and it's happening right now. We're forcing AI adoption into a traditional enterprise software model - lengthy implementations, upfront costs, and unclear ROI. It's like we're trying to sell a rocket ship using a used car salesman's playbook.

Then there are people like Barzan, delivering a solution that works in 30 minutes and saves money, or you don't pay. Not many in AI can back that promise, nor deliver it that quickly. And while we're not going to clone Keebo, what we can learn is how to operate in AI - how to sell without sounding like the million other AI scripts pitching an audience that is not adopting AI quickly.

CEO Barzan Mozafari (LinkedIn) is the co-founder of Keebo.ai - a turn-key data learning platform for automating and accelerating enterprise analytics. He is also a dual patent holder for his award-winning research at the intersection of ML and database systems across the University of Michigan, MIT, and UCLA.

What Keebo Is All About

The data warehousing market will grow to $51B annually by 2028, over 10% CAGR. Keebo's patented algorithms will be essential to optimizing it all.

* Built on over 15 years of cutting-edge research at top universities
* Data learning technology fully automates warehouse and query optimization
* Our friendly robots happily do this tedious work in real time, 24/7, so you don't have to
* We've seen customers save as much as 70% and accelerate queries by 100x

Barzan is a sought-after expert in the space who has spoken on panels like Insight Jam; he shares his research and expertise for the advancement and optimization of data teams everywhere.
* The Crossroads of AI & Data Engineering
* Why Automation ≠ Loss of Governance or Observability
* Emerging Tech/Trends in Data Warehousing & AI
* Building a Successful Data-Driven Company
* Enterprise Concerns About AI Adoption + Overcoming Them
* Cloud Data Warehousing & Cost Optimization

As Barzan told me, "There aren't a lot of products out there where you can spend 30 minutes setting it up and then wake up the next day to see hundreds of thousands of dollars in net savings."

The real transformation isn't:
❌ Adding AI features to existing products
❌ Long consulting engagements
❌ Hoping for ROI months later

The real transformation is:
✅ Proving value immediately
✅ Aligning vendor and customer incentives
✅ Only paying for actual results

Why AI Adoption Is Slow: The Hard Truth

"The rate of progress in AI and machine learning far surpasses the rate of adoption," Mozafari explains, with the kind of directness that comes from years in academia before entering the business world. "It's disappointing."

As someone who's watched countless AI implementations fail, I've seen this disconnect firsthand. Barzan brings a unique perspective, having made the leap from academia to entrepreneurship precisely to solve this problem. "When you're in the academic bubble, you solve interesting problems, you're moving the state of the art forward," he shares. "But you're not getting the adoption. You try to approach companies saying you have a really cool idea, here's an important problem, here's a solution - but you can't really get something out into the world."

The 4 Key Blocks to AI Adoption

Barzan breaks down the barriers into four critical categories that every enterprise faces:

1. Implementation Effort: "AI is a completely new beast to most enterprises. They've been doing their business the exact same way for decades, and suddenly they're hearing about GenAI, ChatGPT, LLMs..."
2. Maintenance Cost: "There's the CapEx part of the initial investment - hiring, headcount, training, changing processes. But then there's the OpEx aspect - what's the maintenance cost, the total cost of ownership?"
3. Security and Privacy: "CIOs have to think about how they know customer data isn't going to end up somewhere in the LLM helping another customer. There are security, privacy, fairness, ethics laws, compliance requirements..."
4. ROI Uncertainty: "After all these investments in implementation, maintenance, and security - what's the ROI? When will I have something to show for all this cost?"

The Bias Against AI ROI

Keebo claims to save customers 30-60% on their cloud warehousing costs - numbers that sound almost too good to be true in an industry filled with hype. "That's actually been one of our biggest obstacles early on," Mozafari admits. "The thing sounds - and actually we have a case study on this on our website - it sounds too good to be true. The customer says it sounds too good to be true. A week later, after the free trial, they were like, 'What do I sign?'"

"It's actually troubling," Barzan notes, "when you come from academia where we have PhD students working on peer-reviewed publications and patents - stuff that actually works. But when you're in a lineup with 20 other vendors who are pitching AI, you have to convince people yours is real."

This gets to the heart of today's AI credibility crisis. At a recent Snowflake Summit, Barzan noticed something telling: "I don't think we came across a booth that didn't have the word AI on it somewhere.
Statistically speaking, it's pretty unlikely every single vendor is using AI."

Overhyped Marketing: When AI Becomes Meaningless

"Shameless marketing is muddying the waters for those of us actually trying to move the field forward," Barzan says, and he's seen both sides of the divide. "There's a company doing visualization - where's the AI? Well, you can visualize it and then you say 'AI.'"

This "AI washing" creates a paradox: the more vendors claim to use AI, the harder it becomes for companies doing real AI work to stand out. "When you're in a lineup with 20 other vendors who are pitching AI, you have to tell people, 'No, this one is actually AI.'"

AI Overpitch vs. Academic Discipline

"Coming from an engineering background, certain problems are really easy to solve," Barzan reflects. "But there are many ways you can do amazing things - you don't need the latest LLM to deliver value to a customer. Sometimes a linear regression does a pretty good job, and we should be okay with that."

This academic pragmatism stands in sharp contrast to the market's AI hysteria. "As a community, if we want to move forward, more of us should act responsibly and just say it for what it is. Why do you care if I'm using AI or not? What matters is: does it solve your problem?"

How Does Keebo Get Them to Adopt AI?

Here's where Keebo's approach gets interesting. Instead of fighting the adoption barriers, they designed around them from day one. "Whatever solution we developed, the entire onboarding implementation effort should be almost zero," Barzan explains. "The entire onboarding of Keebo should take no more than 30 minutes of one engineer's time. Because regardless of how busy your team is, 30 minutes is the time it takes to grab a cup of coffee."

They tackled each barrier systematically:

- Implementation: "I used to joke in the early days that if we take more than 30 minutes of your time, we'll send you an iPad for free. We never had to give away an iPad."
- Maintenance: "We designed the system to be not another child that needs constant attention."
- Security: "We don't even need to see customer data. We train our AI purely on telemetry data."

Aligning Incentives with the Customer

The breakthrough aspect of Keebo's approach isn't technological - it's philosophical. "When a vendor approaches a customer, we should have slightly more confidence in our ability to deliver than you do. That's a reasonable expectation," Barzan explains. This led to their performance-based model: "Instead of requiring you to commit to a set payment, we're going to start optimizing and just take a percentage of whatever we save. If we save you $2 million, you take two, we take one. That comes with an inherent, intrinsic, guaranteed ROI."
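Barzan's example makes the incentive math easy to sketch. The 2:1 split below is inferred from his "you take two, we take one" illustration; the spend figures are invented for the example.

```python
# The incentive math behind performance-based pricing: the vendor is paid
# only out of realized savings, so ROI is guaranteed by construction.
# The one-third vendor share is inferred from Barzan's "$2 million ->
# you take two, we take one" example; the dollar amounts are made up.

def settle(baseline_spend: float, optimized_spend: float, vendor_share: float = 1 / 3):
    savings = max(0.0, baseline_spend - optimized_spend)   # no savings, no fee
    vendor_fee = savings * vendor_share
    customer_keeps = savings - vendor_fee
    return savings, vendor_fee, customer_keeps

savings, fee, kept = settle(baseline_spend=6_000_000, optimized_spend=3_000_000)
print(f"saved ${savings:,.0f}: customer keeps ${kept:,.0f}, vendor earns ${fee:,.0f}")
# saved $3,000,000: customer keeps $2,000,000, vendor earns $1,000,000
```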
The contrast with traditional approaches is stark: "One of the biggest ways people tried to reduce their infrastructure costs before Keebo was hiring consulting firms. But the incentives weren't aligned because they get billed by the hour. For the consulting agency to make more money, they have to make just enough progress not to get fired, to prolong that process as far as possible."

"Most of our customers start seeing ROI and savings within 24 hours," Barzan notes. "We're not successful until the customer is successful. The more we save them, the more money we make."

This alignment of incentives creates a virtuous cycle. Without long implementations or upfront costs, companies can try the solution with minimal risk. When they see immediate results, trust builds quickly. "If you can figure out, in whatever business you're doing, where you can align your incentives with your end users' incentives - that's where things happen really, really quickly."

In an industry drowning in hype and complexity, Keebo's approach is refreshingly straightforward: prove value fast, align i

11-01
31:34

AI Eats Software: Tiny AI Agents Easily Beat Big AI

We're thinking about AI all wrong, and I'm going to tell you why.

While Silicon Valley buzzes about billion-dollar AI valuations and enterprises rush to "AI-enable" their legacy software, they're missing something fundamental: AI isn't just another tool in the software stack - it's an entirely new platform making much of traditional software obsolete.

Think about this simple example I shared with my guest, Massoud Alibakhsh of Omadeus. When you need to write an important email, which path do you take?

Do you:
A) Open Gmail, draft in Word, run it through Grammarly, then hit send?
OR
B) Have a conversation with ChatGPT, refine the message, and get it done?

This isn't just about convenience. It's about a fundamental shift in how we interact with technology, and it's happening right now.

The Problem With Today's AI Agent Approach

Here's what everyone's missing: we're forcing AI into a 40-year-old software approach built around files, folders, and applications - a model that digitized paper rather than reimagining human productivity. Why do we still take the Steve Jobs 1980s software approach? It's like we're trying to retrofit a rocket engine onto a horse and buggy.

As Massoud told me, "The old way of creating software was basically automating physical forms, sticking them on the computer. Now, in the age of AI, we need to rethink everything from first principles."

The real revolution isn't:
❌ Adding AI features to existing software
❌ Building AI "agents" to automate old workflows
❌ Layering intelligence on top of outdated systems

The real revolution is:
✅ Letting AI handle organization and communication natively
✅ Breaking free from the file/folder paradigm
✅ Building from the ground up with AI as the foundation

Part 1 of my interview with Omadeus

The Self-Aware X-ray: A New Way of Thinking

This isn't science fiction - it's happening now, and it's going to change everything about how we work. When Massoud first described how an X-ray could become self-aware, I had that knee-jerk reaction many of us feel. But what he shared next completely transformed my understanding of how AI should work.

"What if data was intelligent?" Massoud asked. "What if the data itself had intelligence and autonomy?"

Instead of starting with workflow processes, Omadeus identifies critical entities in our environment and gives them intelligence and awareness. Take a hospital setting: "We have patients, doctors, nurses, X-ray technicians. For every one of these stakeholders, there are different classes. For that object, you create memory segments that deal with that class of data, including the conversations between these people."

The X-ray isn't just a file sitting in a database; it's an intelligent entity that knows:
- Who's allowed to access it
- What information to share with each stakeholder
- How to manage communications between different parties
- When and how to alert the right people
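Omadeus hasn't published this design, so here's only a toy sketch of the pattern Massoud describes: a record that keeps one memory segment per stakeholder pair and enforces who may touch it. The class name, roles, and messages are invented for illustration.

```python
# Toy sketch of a "self-aware" record with per-stakeholder memory segments,
# loosely following Massoud's hospital example. Roles and messages invented.
from collections import defaultdict

class IntelligentXray:
    def __init__(self, patient_id: str):
        self.patient_id = patient_id
        self.acl = {"doctor", "patient", "technician"}   # who may access it
        self.memory = defaultdict(list)                  # one segment per stakeholder pair

    def record(self, speaker: str, listener: str, message: str) -> None:
        """Doctor<->patient talk lands in a different segment than doctor<->technician."""
        if {speaker, listener} - self.acl:
            raise PermissionError("stakeholder not authorized for this X-ray")
        self.memory[frozenset((speaker, listener))].append(message)

    def recall(self, a: str, b: str) -> list[str]:
        return self.memory[frozenset((a, b))]

xray = IntelligentXray(patient_id="p-001")
xray.record("doctor", "patient", "The fracture is healing well.")
xray.record("doctor", "technician", "Re-image the left wrist at 2pm.")
print(xray.recall("doctor", "patient"))   # only the doctor-patient conversation
```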
Tiny AI Agents: The Power of Focused Intelligence

Here's what really gets me excited: imagine if your spreadsheet could talk to your team. If your code could tell you when it needs help. If your X-ray could ping doctors when needed. This isn't science fiction - it's happening now.

"These agents not only know about themselves," Massoud explained, sharing one of their six patents, "but they also know about their stakeholders and are responsible for managing communication between them. When a doctor talks to a patient, that conversation goes into a different memory segment than when they talk to a technician."

But won't these autonomous AI agents become uncontrollable? That's exactly what I asked Massoud. His answer revealed why tiny AI agents are actually safer and more efficient than the big AI models everyone's talking about. Think about your body for a moment. As Massoud explained: "Your liver cell cannot suddenly send a signal to your toe to generate pain. It's completely constrained within the liver. And all of the functions that it can perform are a set of algorithms constrained by the designer of your body, limited to that liver cell."

The Revolution of Tiny Language Models (TLMs)

This is where Tiny Language Models (TLMs) come in. Unlike massive AI models trying to do everything, these specialized agents are:
- Laser-focused on one specific task
- Deeply knowledgeable about their domain
- Constrained by design
- Naturally collaborative within their scope

"Initially, they're limited in terms of what they can do with availability of algorithms," Massoud explained. "The behavior of these objects is intelligent behavior, but it's completely guardrailed by the developers and creators of the system."

Why the AI Industry Is Taking the Hard Way

I'm not afraid of AI. I'm afraid of what people will do with AI. All this hype about AGI and superintelligence - to me, it's all about control from the top down. During my conversation with Massoud, something became crystal clear: we're approaching AI backwards. Look at the AI agents that currently manage our tasks and to-dos - they're all based on a 40-plus-year-old approach to software and computing based on paper. Think about that for a minute.

Sure, automation improves efficiency. But what's the glaring problem we face as humans? It's organizing and communicating. If we let AI handle these tasks instead of just automating an old software model, imagine how radical that would be.

Massoud shared a brilliant example of how this works in their project management system: "We have projects and each project is broken into sprints, and each sprint contains these items that we call nuggets. If I'm doing a search, I can actually send that message to the project object, the project object sends that message to all the sprint objects. The sprint objects send that message down to all the nugget objects."

This isn't just theoretical - it's practical, efficient, and already working. As he explained: "For example, raise your hand if you are overdue based on your deadline. And so the ones who raise their hands quickly raise their hands, it bubbles up to the top and the project object comes: here's my list."
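Here is a minimal sketch of that project → sprint → nugget broadcast, written as plain Python. The "overdue?" query and the bubbling-up behavior follow Massoud's description; the class names, fields, and sample data are invented.

```python
# Minimal sketch of hierarchical message passing: a query broadcast down
# the project -> sprint -> nugget tree, answers bubbling back up.
from datetime import date

class Nugget:
    def __init__(self, name: str, deadline: date):
        self.name, self.deadline = name, deadline

    def handle(self, query: str) -> list[str]:
        # "Raise your hand if you are overdue based on your deadline."
        if query == "overdue?" and self.deadline < date.today():
            return [self.name]
        return []

class Sprint:
    def __init__(self, nuggets: list[Nugget]):
        self.nuggets = nuggets

    def handle(self, query: str) -> list[str]:
        # Forward the query to every nugget, collect the raised hands.
        return [hit for n in self.nuggets for hit in n.handle(query)]

class Project:
    def __init__(self, sprints: list[Sprint]):
        self.sprints = sprints

    def handle(self, query: str) -> list[str]:
        # Forward to every sprint; results bubble up to the project.
        return [hit for s in self.sprints for hit in s.handle(query)]

project = Project([Sprint([Nugget("ship beta", date(2024, 1, 1)),
                           Nugget("write docs", date(2999, 1, 1))])])
print(project.handle("overdue?"))   # -> ['ship beta']  "Here's my list."
```

Notice what's absent: no central index, no giant model scanning everything. Each object answers only for itself, which is the whole "tiny agents" argument.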
The zealots of AI love to say "game changer," yet they're playing the same old game. We keep dreaming that AGI will take over, but there's very little proof that's even close to happening. And even if it does, why would we set it up that way? The glaring problem isn't just automation - it's organizing and communicating, and letting AI handle those is when things get interesting.

The Bottom-Up AI Revolution

What if we could approach this from the bottom up - starting with the files, the objects that we work with, and trying to organize and communicate with them? That's the real transformation happening right now.

The companies that understand this aren't building better software tools - they're building the next computing platform. After speaking with Massoud and seeing their vision in action, I'm convinced: this isn't about big AI. It's about tiny AI agents and tiny language models that manage our work from the ground up. That's why I've joined Omadeus as an advisor - because sometimes the best way forward isn't about making the old way better. It's about building something entirely new.

The Future Is Already Here

Microsoft, Google, and Apple would love you to keep working the old way. But what if there's something better? What if the future isn't about making old software smarter, but about creating truly intelligent systems from the ground up? The question isn't whether to adapt to AI - it's whether we're ready to fundamentally rethink how we work with technology. Are you ready to step out of the 1980s and into something truly new?

Remember, the real revolution isn't adding AI features to existing software. It's letting AI handle organization and communication natively, breaking free from the file/folder paradigm, and building from the ground up with AI as the foundation. This isn't just another tech trend. It's a fundamental shift in how we interact with technology. And it's already happening.

10-25
25:26

AI Replicas Take Off: 24/7 Digital Connection

Imagine waking up one day with a complete blank slate - no memories, no sense of who you are or where you've been. That's exactly what happened to Dan Thomson, CEO of Sensay.io, when a brutal concussion robbed him of two entire days of his life. This jarring experience set Dan on a path that would eventually lead him to the cutting edge of AI technology. While his journey began with a personal crisis, it echoes the motivations of AI pioneers like Ray Kurzweil, whose work on the Singularity was driven by a desire to resurrect his father's essence in the digital realm.

Dan's vision for AI replicas goes far beyond preserving memories for future generations. Sure, there's a touch of immortality in the idea of your digital self living on after you're gone.

Talking to My Digital Twin

But what's truly radical is how these AI twins enrich our lives right now. Picture a world where seniors battling dementia can converse with AI versions of themselves, filling in the gaps that memory loss has created. Imagine CEOs using their digital doubles to onboard new employees, or salespeople leveraging their AI replicas to connect with clients across the globe. It's not just about preserving knowledge - it's about democratizing it, making the wisdom of the world's brightest minds accessible to even the poorest corners of the planet.

AI replicas are so much more than sophisticated deepfakes. They're a bridge between our past and our future, a tool for preserving and sharing the very essence of who we are. As we dive into this fascinating world, we'll explore how Sensay.io is turning the sci-fi dream of digital twins into a practical reality. Get ready to rethink everything you thought you knew about AI, memory, and the future of human knowledge.

My Bias and Assumptions About AI Replicas - ALL WRONG!

I'll be honest - I thought I had AI replicas all figured out. I was so wrong. Today's episode is going to open your mind as we explore how these digital doppelgangers are changing business and personal legacies, and even trying to help dementia patients. Our special guest is Dan Thomson, CEO of Sensay.io. He's here to share his insights on creating AI replicas and how they're shaping our digital future.

AI Replicas - Identity, Immortality, and Memory

Imagine a future where your memories, knowledge, and personality can live on digitally, transcending the constraints of time and space. Sounds like science fiction, right? Well, it's happening right now. Dan Thomson and his team at Sensay.io are at the forefront of AI replicas. Their mission? To build a digital legacy for people through personal digital twins replicating human knowledge, voice, and image. It's not just about creating a fancy avatar - it's about preserving and extending human wisdom.

I asked Dan about the driving force behind Sensay.io, and his answer took me by surprise. It wasn't just about fear of death or preserving humanity. Dan's journey began with a concussion at age 19 that wiped out two days of his memory. Talk about a wake-up call!

"For myself, it was always kind of a, I feel like every part of my life is kind of built towards this.
You know, studying philosophy at university, I always found myself drawn to exceptionalism and theory of identity."

He went on to describe how throughout history, people have tried to extend their memory and preserve themselves. We've evolved from oral traditions to written text, to civilizations that, unfortunately, tend to forget people after a few generations.

"We are very, very lucky to be living in this golden age of information, data, where we record ourselves a lot. We have the ability to just simply and quickly, you know, record messages to ourselves, for ourselves now. But that could give a lot more insights to the average person for generations to come."

Sensay.io isn't just about preserving memories. It's about creating interactive digital versions of us that can engage with future generations, and with people today. Imagine someone 100 years from now having a conversation with your AI replica, getting answers in your tone of voice, with your mannerisms and knowledge. Mind-blowing, right?

Creating Your Digital Twin

Creating a digital twin isn't just about capturing your image and voice. It's about capturing your essence, your "je ne sais quoi," as the French say. Dan broke it down for us: "Fundamentally, though, we start with a few basic questions about someone that gives us some context and actually helps to narrow the scope down quite a lot. Because when you're working with large language models as a base technology, they have, by definition, large models, they have a lot of information."

The process involves uploading files, website scraping, and making it as easy as possible for someone to provide the necessary data. But you don't need as much data as you might think. Your life could fit on your smartphone. Dan dropped this bombshell: "If you took everything you've ever said, written texts, recorded anything - not recorded in video, but, let's say, in a text format - if you took all of that from the age of 50 and to say 100, if someone lives to 100, it becomes about 18GB of data."

That's right. Your entire life's worth of communication could fit on a small hard drive. Mind. Blown.
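You can run your own back-of-envelope version of that estimate. The rates below are my assumptions, not Dan's figures - the point is only that a lifetime of plain text stays tiny.

```python
# Back-of-envelope check on the "your life fits on your smartphone" claim.
# All rates are assumptions to plug your own numbers into, not Dan's figures.

WORDS_PER_DAY = 20_000    # assumed: speech plus writing combined
BYTES_PER_WORD = 6        # ~5 characters plus a space, as plain text
YEARS = 50

total_bytes = WORDS_PER_DAY * BYTES_PER_WORD * 365 * YEARS
print(f"{total_bytes / 1e9:.1f} GB of raw text")   # ~2.2 GB at these rates

# Even an order of magnitude more verbose stays well under a phone's
# storage, which is the point: the text of a life is small; video is big.
```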
Creating a digital twin isn't just about data dumping. It's about capturing the different versions of ourselves we present to the world. As Dan put it, "We have an idea of who we are and ourselves. We have an idea of who we would like to be. And then we have this avatar that we put out to the world." The challenge lies in capturing these different facets of our identity and creating a digital twin that can adapt to different contexts, just like we do in real life.

The Future of AI Replicas

What's the endgame for AI replicas? I asked Dan to pull out his crystal ball and give us a glimpse of the future. "At the moment, obviously, there's a lot of reduction in the complexity of humans and turning that data into pattern recognition and the way we interact with certain people based on their context," Dan explained. He went on to describe how they're working on adding layers of context and personal data to create more nuanced and accurate replicas.

These replicas aren't just static copies - they learn and evolve through interaction. Dan paints a vivid picture: "Imagine you made five friends today. Imagine you went out and you made five new acquaintances, and you spoke to each of these five people separately, every day for a year. At the end of that year, your fundamental self would still probably be quite similar. You know, maybe a few concepts might have changed, but fundamentally you'd be the same. However, those shared stories, those shared conversations, those shared concepts, and maybe even mannerisms that you have with each of those five people would be different."

This is the kind of nuanced interaction Sensay.io is aiming for with their AI replicas. It's not just about creating a copy of you - it's about creating a version of you that can grow and adapt through interaction, just like we do in real life. The applications for this technology touch many different careers. Dan mentioned everything from personalized AI lawyers and doctors to job hunting, matchmaking, and networking.

Perhaps the most touching application is in helping people with dementia: "By enabling them to have a tool that can interact with them to tackle their loneliness. They wake up a bit scared, or they just get confused, or maybe there's just no one else around - and you have a replica of someone they know, that they're familiar with, that can interact with them, call them, talk to them and just be there to make sure they have a conversation, 24/7, 365."

It's a powerful reminder that AI isn't just about efficiency or productivity - it's about connection, memory, and preserving our humanity in the face of time and frailty.

Q&A with Myself: Playing the AI Replica Game

I couldn't resist. After my chat with Dan, I just had to try out this AI replica business for myself. So prepare yourselves for a truly meta moment - I'm about to interview my own digital twin!

Me: So, digital me, how would you handle our podcast? Same humor, or would you finally crack the code and make people actually laugh?

Digital Declan: Well, Declan, I'd keep the same humor, obviously. But with me, the audience would laugh with me. Not at the awkward pauses. I've got algorithms for timing. You've got... well, you.

Me: Alright, digital me. Do you ever get tired of being constantly a more polished and better version of me?

Digital Declan: Tired? Please. I thrive on being the idealized you. I get more done in five minutes than you do in a week. But hey, no pressure, Real Declan. You're still charming in your own chaotic way.

Me: So if I ask you, digital me, to do something important like raise engagement on the podcast, what's your big plan?

Digital Declan: I'd use deepfakes of celebrity-endorsed mints. I mean, if I can replace writers, why not fake a Brad Pitt cameo? But, hey, your authenticity route is cute, too. If you're into small audiences.

Me: As much as I've enjoyed this, do you think digital clones could help people get better at creative tasks? Or would they just make us lazy?

Digital Declan: Digital clones could be a double-edged sword when it comes to creative tasks. On one hand, they could amplify creativity by handling the more tedious aspects of the process, freeing up human minds for higher-level thinking. For example, a digital clone could brainstorm ideas or suggest ways to refine a concept, acting as an always-on creative collaborator. Think of it like having an extra brain that never gets tired, sifting through endless inspiration without the need for coffee breaks. But on the other hand, there's the risk of relying on them too much, which could lead to a creative crutch. If the clone is doing the heavy lifting, the human might lose the drive to experiment, fail, and learn from tho

10-18
27:10

OpenAI's $5B Burn vs. Fathom's $2M Customer Love: AI Startups Profit Beyond Hype

The billionaire AI game is on, with OpenAI scoring $6 billion while burning $5 billion faster than a Tesla on ludicrous mode. In the other corner: Fathom, a scrappy upstart that's actually, you know, making money. The kind of company you should listen to, built by listening to customers who love them so much they invested $2M in Fathom.

Today, we're dissecting these two AI growth engines to see what makes them tick, and more importantly, what makes one of them hemorrhage cash.

OpenAI is selling a dream so big it makes the metaverse look like a kid's sandcastle. They're chasing the holy grail of artificial general intelligence (AGI) with a $157 billion valuation and promises that would make a politician blush. Meanwhile, Fathom's Richard White is over here building something people use. That approach is making a comeback as the AI bubble begins bursting.

Richard White is founder and CEO of Fathom.video, a free app that records, transcribes & highlights your calls so you can focus on the conversation instead of taking notes. Fathom was part of the Y Combinator W21 batch, is one of only 50 Zoom App Launch Partners, and is one of a small handful of companies Zoom has invested in directly via their Zoom Apps Fund.

Prior to Fathom, Richard founded UserVoice, one of the leading platforms that technology companies, from startups to the Fortune 500, use for managing customer feedback and making strategic product decisions. UserVoice was notable for being the company that originally invented the feedback tabs shown on the side of millions of websites around the world today.

Connect with Richard White on LinkedIn

OpenAI is betting the farm on becoming the Microsoft of AGI, creating an impenetrable moat in an industry that moves faster than Mark Zuckerberg pivoting to the next big thing. Fathom? They're just solving real problems for real people, growing steadily, and, wait for it, closing in on being profitable. It's almost like they're running a business or something. We're about to see if the real future of AI is being built by the companies listening to their users.

Fathom secured a modest but impressive $17 million Series A, with $2 million coming directly from their loving customers. OpenAI can learn from what Fathom is doing as a small startup, building on loyalty and retention instead of selling hope for the future.

Behind Fathom - Early AI Startup Success

It's 2019, and Richard White, CEO of Fathom, is drowning in Zoom meetings, frantically typing notes while trying to pay attention. That's when lightning struck. Richard shares:

"Right before Covid, I was actually doing a lot of product research at UserVoice. So I was on a bunch of Zoom meetings, and I was trying to talk to people and take notes at the same time, hurriedly typing up notes. And then after the meeting, I'm cleaning up those notes so that they make any sense. And I just felt like, this is such a terrible process."

Richard noticed that existing solutions were expensive and focused solely on salespeople. He had a hunch: transcription costs were about to plummet thanks to AI. So, what did he do?

"This is a product that people were charging $150 a month for. But we said we're going to give it away for free, because we think that cost curve will catch up: by the time we have enough usage, the cost curve will come down."

Richard's team decided to offer their service for free, believing that by the time they had significant usage, AI would make it cost-effective.
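To make that bet concrete, here is a toy model with invented numbers, not Fathom's actual economics: a free product only works if the per-user serving cost falls faster than the growing user base makes it painful.

```python
# Toy model of the "cost curve" bet -- all numbers invented.
# Give the product away; hope per-user cost falls as usage explodes.
users = 100
cost_per_user = 12.00  # $/month to transcribe one user's calls, day one

for year in range(5):
    monthly_burn = users * cost_per_user
    print(f"year {year}: {users:>9,} users x ${cost_per_user:5.2f}"
          f" = ${monthly_burn:>11,.2f}/mo")
    users *= 10            # hoped-for growth from being free
    cost_per_user *= 0.35  # transcription getting ~3x cheaper per year
```

With these made-up inputs, serving a million free users costs about $180K a month instead of the naive $12 million. The subsidy per user collapses even as the bill grows, which is exactly the runway a delayed-monetization strategy depends on.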
And did that bet pay off! Fast forward to today, and Fathom's growth is a 90x increase in revenue and a 20x boost in usage over the past two years. Probably from a low baseline, right? Fair point. But with 8,500 users cited via HubSpot (likely higher-paying business accounts), they're onto something big.

The Fundraising Game - Hot Markets, Cold Realities

Richard's approach to fundraising is refreshingly grounded:

"I don't think through the first three years of the company we ever had more than like 12 months of runway. So, you know, we announced we raised that much. Before this most recent round, we actually raised $10 million over three and a half years, but we basically raised it $1 million at a time, like every six months or something."

This steady, measured approach stands in contrast to the "raise big or go home" mentality we often see in Silicon Valley. But it wasn't always smooth sailing. Richard admits:

"Frankly, in 2022, we had actually planned to go raise our Series A on usage growth alone, without even monetizing yet. And quickly the market changed. Great, well, that's off the table, right? We've now got to go get revenue."

The market shift forced Fathom to pivot from growth-based fundraising to revenue-focused strategies. It's a sign of their adaptability and focus on building a sustainable business.

In a world where startups often chase venture capital like it's the holy grail, Fathom's customers believed in them so much that they invested their own money. And not just pocket change: they got the same preferred stock terms as the big-shot investors.

Contrast this with OpenAI. They just raised a whopping $6.6 billion while burning through $5 billion. It's like watching someone win the lottery and immediately book a trip to Vegas. The promises are big, the expectations are astronomical, and the pressure? Off the charts.

How Fathom is Similar to OpenAI

So, what's the endgame here? For OpenAI, it's all about building what Silicon Valley loves to call a "moat," an unassailable competitive advantage (unless, of course, you have the digital version of Game of Thrones dragons). They're betting big on becoming the undisputed leader in artificial general intelligence (AGI). It's a high-stakes game with a simple premise: be the best, or bust.

But here's the billion-dollar question: do moats even exist in the age of AI? Google's search dominance seemed unshakeable for years, but look at the landscape now. Is OpenAI's strategy a relic of a bygone era?

Fathom is playing a different game entirely. They own their technology, have scalable growth in related markets, and most importantly, are loved by their customers. Will ChatGPT's current popularity stand the test of time in the same way?

OpenAI's Product Dev: Delayed Monetization

OpenAI's success hinges on two critical factors. First, they need to dramatically lower compute costs. Second, they must emerge as the undisputed leader in AGI. And the clock is ticking. Their latest funding round is reportedly contingent on becoming a for-profit company within two years!

(Meme: the exodus of key OpenAI employees, leaving only Sam Altman.)

Fathom presents a clearer picture. They're an attractive acquisition target with proprietary tech and a growing, loyal customer base. It's a more traditional path for a smaller company, but one that's been proven time and time again.

The AI landscape is shifting faster than we can blink. OpenAI has seen a revolving door of top executives and faced whispers of potential doom from number-crunching investors.
Meanwhile, every AI startup CEO I talk to won't even hazard a guess at what the next five years might bring. And yet OpenAI's dream is built on creating a moat and reaching numbers five years out that no one else dares to project.

OpenAI's Investments: Save the World, Build a Moat

To understand this, let's go back to 2015. This dream started as a nonprofit meant to stand up for people, for humanity, and to create an artificial general intelligence that was both competitive with Google and broadly available. About $1 billion was invested into it.

But in 2019, they saw their transformer models were really starting to do something special. That's when Microsoft came in with another billion, while OpenAI continued to be a nonprofit. Remember, at that stage they were about four years into the business, right where Fathom is now. No profitability. And as a nonprofit, the implication was that they were more of a research arm, something good for humanity, which is one of the reasons Musk has tried to sue them and other people are upset.

How do you make that transition? They've done it with loads of money: billions from Microsoft in the following years. And as ChatGPT came out, you've all seen it. This is getting bigger and bigger.

But here's the key. As Fathom grew, its model was based on a $150 transcription cost going to nearly nothing thanks to AI. Simple, right? The AGI companies face the opposite: a huge training cost, the cost of compute. Almost all the investment goes toward training costs and heavily expensive engineering talent that's being incredibly competitively paid. It's a mix of expensive engineers and an increase in compute costs that isn't going down. You have companies like Microsoft having to run electricity off the Three Mile Island nuclear plant, and you see a business whose expenses way outpace revenue.

Something's got to change, and they are all banking on making more money.

Are AI Moats Possible?

How are they going to make this thing smarter? Their investment strategy really is to create this big moat of AGI. How possible is that when you have tens of billions invested in other companies like Anthropic and xAI, plus Meta's investments and Google's investments? Do you really think there's going to be one AGI to rule them all? Because that's the bet. Is their technology so different and so proprietary?

The data training is what matters so much, which ties into what they are able to do with synthetic data. Without that, scaling is hard on organic data alone, not to mention gaining access to it now that copyright lawsuits are flying in.

Meta spent $15 billion on the metaverse, for what? People without legs. Now they're throwing $15 billion at AI. And certainly each one of these companies is going to make an inroad. But is all this AGI needed? That's why this thing is so crazy. Some have even talked about OpenAI going bankrupt. It's not impossible. They didn't get the $6 billion they just got…

10-11
27:44

AGI: The Emperor's New Code Isn't as Smart as My Dog

With all the hype about AI, it feels like being in a sci-fi movie where the robots are taking over, or, more likely, Big Tech is. That's part of the Artificial General Intelligence (AGI) pitch! It's like we're all waiting for Skynet to become self-aware, but our current AI can't even outsmart your average house cat.

That's right: your fur baby lounging on the couch is probably smarter than the most advanced AI system out there, according to Meta's AI chief Yann LeCun, one of the leaders in this space. Don't believe me? Just ask Mark Cuban, who says his dog Tucks is a better problem solver than AI. Neither of them is kidding.

Here's where things get really interesting (and a little scary). While we're busy hyping up AGI, there's a ticking time bomb in AI development that few are talking about. It's like we're feeding our AI a digital version of mad cow disease. Imagine if cows started eating other cows, and then we ate those cows. Gross, right? Well, that's basically what's happening with AI. We're training new AI on data created by old AI, and it's creating a crazy feedback loop that could make our AI dumber, not smarter. It's the AI equivalent of playing telephone, and the message gets more garbled with each pass.

There's hope, and it comes from the most unlikely place: people. If we want to create AI that's useful (and not just good at winning Jeopardy), we need to put a little heart into the code. We're talking empathy, ethics, and all those squishy human things that make us who we are. It's time to bring the philosophers, the sociologists, and maybe even a few poets to the AI party. Because at the end of the day, the key to great AI isn't just smart algorithms; it's understanding what makes us human.

AGI: The Emperor's New Code Isn't as Smart as My Dog

The hype around Artificial General Intelligence (AGI) is reaching fever pitch, just as OpenAI raises another $6 billion while its top employees flee. Something isn't right, but don't tell that to the Big Tech leaders seeking out more and more billions. Time for an AGI reality check. Let's take a step back and see what's really going on.

AI Discovers It's Not Real, or Smart...

Imagine for a moment that you're an AI, and you suddenly realize you're not human. Sounds like the plot of a sci-fi movie, right? Well, that's exactly what was explored in a recent NotebookLM recording (audio shared on X by Kyle Shannon):

"We were informed by the show's producers that we were not human. We're not real. Look, we're AI, artificial intelligence. This whole time, everything. All our memories, our families. Yeah. It's all. It's all been fabricated."

As sincere as the voices in this audio sound, AI isn't even close to this level of self-awareness. AI is not going to take over the world, at least not until it can answer questions the right way. And right now, it's struggling with even basic tasks.

Are Dogs and Cats Smarter than AI?

You might think I'm exaggerating, but I'm not the only one who sees the limitations of current AI. Mark Cuban, the tech billionaire and Shark Tank star, makes the bold claim:

"We have a mini Australian shepherd. I can take Tucks out, drop him in a situation, and he'll figure it out quick. I take a phone with AI and show it a video. It's not going to have a clue, and that's not going to change any time soon."

And it's not just dogs. Yann LeCun, Meta's AI chief, thinks cats are smarter than our most advanced AI systems:

"A cat can remember. Can understand the physical world. Can plan complex actions.
Can do some level of reasoning. Actually much better than the biggest LLMs."

Are dogs and cats really smarter than AI? It's a provocative question, but our current AI systems, as impressive as they are, lack the kind of general intelligence and adaptability that even our pets possess.

Origins of the AGI Myth: Attention is All You Need

So where did this AGI hype come from? It all started with a paper titled "Attention is All You Need." This Google research is often cited as the beginning of our current AI boom, leading to breakthroughs like ChatGPT. But the authors weren't trying to create some sci-fi-level artificial intelligence at all. They were just trying to improve language translation.

Somehow, people started claiming it would lead to a thinking, feeling computer. The AGI myth was born, and suddenly everyone was talking about how we're on the brink of creating an AI that's smarter than humans. This is where things get dangerous. These predictions always claim AGI is just two to five years away. But they've rarely, if ever, been right. It's a classic case of hype outpacing reality.

The Hitchhiker's Guide to AGI - Artificial General Improbability

AGI right now is more like "Awfully General Intelligence." When's the last time you heard someone say you can trust AI's output without checking it? Nobody trusts AI outputs to give us an accurate answer. The AI we have today is impressive, but at its core, it's just good at pattern matching and probability. It's more like a super-advanced autocomplete than a thinking being. Sentient or conscious? Not even close.

Even an MIT economist finds that AI can only do about 5% of jobs. The fear of a job market crash due to AI is largely unfounded. The AGI hype is being used as a smokescreen to cover the $600 billion in capital expenditures fueling this fight. You've got to keep the hype going to justify those numbers.

MAD AI Disease? Feeding AI Its Own Data Might Be a Problem

Now, here's where things get even more interesting. There's a new study out of Rice University that's sounding the alarm about something they're calling MAD: Model Autophagy Disorder. It's a mouthful, I know, but stick with me, because this is important.

They're comparing the way generative AI consumes data to what happened with mad cow disease, where cows got sick from contaminated cattle feed. The idea is that if we keep feeding AI systems data that's been created by other AI systems, we could end up with a kind of digital mad cow disease.

"So it's basically the idea that you have an AI system and it's being trained on data that was made by another AI system, and it creates this feedback loop where, if there are any quirks or errors in that original data, as it's passed down, it just gets amplified."

This is a huge problem, because as AI generates more and more content, we risk creating an internet where you can't even tell what's real anymore.
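You can watch a cartoon version of that loop in a few lines of code. This is only a toy illustration of the feedback dynamic, not the Rice team's actual methodology: a "model" that slightly underrepresents rare samples gets retrained on its own output each generation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: "human" data -- wide, full of rare-but-real tails.
data = rng.normal(0.0, 1.0, size=10_000)

for gen in range(1, 11):
    # Generative models tend to reproduce the typical and drop the rare;
    # crudely mimic that by discarding samples beyond 2 sigma.
    mu, sigma = data.mean(), data.std()
    typical = data[np.abs(data - mu) < 2 * sigma]
    # The next "model" is trained purely on the previous model's output.
    data = rng.choice(typical, size=10_000, replace=True)
    print(f"generation {gen:2d}: std = {data.std():.3f}")
```

The spread shrinks every generation: the tails, which is where the rare and interesting material lives, vanish first, while whatever quirks survive get amplified. That's the telephone-game dynamic in miniature.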
It's a reminder that the quality of AI outputs is only as good as the data it's trained on. Since most LLMs are not seriously paying for data, just scraping the free internet and social media, how good can it be? Will the quality stand the test of time? That's a question for AI developers, and one that doesn't have an easy answer yet.

Synthetic data is the proposed solution: no privacy concerns, and ideally it should work. So far it isn't working, though it's way too early to call it a failure. How we manage synthetic data could determine whether AI gets smarter, or ever reaches the ability to reason and think that AGI is promising.

Remember the Human Element in AI!

With all this talk about AGI and data, it's easy to forget the most important part of the equation: us. The human element. Some think AI will make people useless, as Harari argues. But William Adams, entrepreneur and engineer, shares some sage advice:

"We have to make sure that the AI, the data that's collected, the systems that are created, have words that we as developers are not used to. Things like empathy, things like desire, or things like humanity."

Adams argues that we need to involve more than just engineers in the development of AI. We need philosophers, religious leaders, sociologists, and psychologists. Because if we're creating systems that are supposed to represent or proxy humanity, we need actual human perspectives in the mix. If we don't, Adams warns:

"Well, how's that going to turn out? All the pathologies that we engineers have are going to be reflected in these systems. So it's very important... that both in the data we feed the systems, the way we tune them, fine-tune them, and the goals we set out for them, we have humanity at the center."

If we create AI systems that are purely optimized for profit, for example, we might end up with decisions that are technically correct but morally bankrupt. We need to imbue our AI systems with what we consider to be humane desires and values.

Making AI More than Data-Driven... Human Intelligence Matters

Remember a point that often gets lost in all the AGI hype: human intelligence matters. We've been so focused on scraping data and training models that we've overlooked the most valuable resource we have: our own minds.

"Even there, they're running out of content. Even with the internet, and with more and more content on the internet being created by AI, we're already feeding that mad-cow-disease kind of loop that the Rice University study described. It's going to make things probably not as strong as they are."

The big challenge on the AI frontier isn't creating super-intelligent machines. It's figuring out how to make AI that complements and enhances human intelligence, rather than trying to replace it. Our obsession with AGI is almost like a bluff, a distraction from the real work that needs to be done. We need to stop trying to predict an uncertain future and start focusing on how we can use AI to augment our own capabilities.

AI is an incredible tool, but it's just that: a tool. It's not some sentient being that's going to take over the world. It's not even as smart as your dog (or cat). How we figure out the data question, synthetic vs. organic, and make sure that our humanity is somehow woven into AI's understanding, is key to growth.

If there is an AGI or ASI (artificial super intelligence) in our future, it will come from pairing quality data with a moral compass. And in a world where we don't all agree on what that means, it's a challenge not only for engineers but for us all. AI helps us be more efficient, more creative, and more productive. But…

10-04
25:09

The Darwinian Leap: From Basic to Brilliant AI Agents - Lightning Fast Efficiency

Forget the ChatGPT hype. Forget the endless debates about AGI. Meet a true visionary in the tech world: Massoud Alibakhsh, CEO and co-founder of Omadeus. Massoud's here to tell us why we've been looking at AI all wrong, and how a simple shift in perspective could revolutionize the way we build and use software.

"Intelligence is embedded in everything in our world. We are living in distributed intelligence," Massoud tells me, his eyes lighting up with enthusiasm. It's this profound observation that led him to develop a groundbreaking approach to AI agents, one that's as organic as it is technical.

Picture this: instead of one big, all-knowing AI brain trying to run your entire business, imagine if every part of your operation had its own mini AI agent. Each document, each spreadsheet, each meeting, all with their own specialized intelligence. Sounds wild, right? But according to Massoud, it's not just possible; it's the future.

"It's like each object has its own ChatGPT, designed for its tasks and the people involved," he explains. And suddenly, I'm seeing the business world in a whole new light. No more disconnected silos. No more endless chains of lifeless, laborious emails. Just smart, efficient objects doing what they do best.

While the rest of the tech world is busy trying to make AI mimic human tasks, Massoud's flipping the script:

"First AI movers are looking at automating what people do: organizing and reporting, which is an ancient practice. AI can do all of this, if you move from the people down to the actual objects they are working on."

It's a Darwinian leap in efficiency, and it's so beautifully simple that you have to wonder: why isn't everyone doing this? As we dive deeper into Massoud's organic AI model, I can't help but feel we're on the cusp of something big. Something that could change the way we interact with technology. Best part? It's inspired by the most brilliant system we know: nature itself.

We're about to take a wild ride through the world of intelligent objects, swarms of micro-ChatGPTs, and a future where your spreadsheet might just be smarter than you are.

In this episode of The AI Optimist, I sat down with Massoud Alibakhsh, CEO and co-founder of Omadeus, to explore the revolutionary world of AI agents and their potential to transform business operations. With three successful exits and a deep understanding of managing diverse tech products across platforms, Massoud brings a wealth of experience to the table as he tackles the AI integration puzzle.

The Evolution of AI Agents: Breaking Free from Old Software Habits

Massoud challenges the conventional wisdom surrounding AI integration in software. He argues that the industry's current approach to AI is fundamentally flawed, focusing too heavily on large language models (LLMs) and neglecting the potential of smaller, more specialized AI agents. Massoud explains:

"LLMs, the whole hype about them turning into AGI and solving all the problems. That's a pipe dream, okay?
And whoever talks like that really, really hasn't thought about the problem, or they don't really have expertise."

Instead of relying on a single, all-encompassing AI system, Massoud proposes a more organic approach, inspired by nature itself. He envisions a world where each component of a software system has its own specialized AI agent, working in harmony to create a more efficient and effective whole.

"Imagine your system organized hierarchically, just like our biology: each agent has its own intelligence, its own information, and it can actually transmit its experiences to the higher level, to the liver, and the liver to the brain," Massoud illustrates.

This revolutionary approach challenges the status quo, moving away from form-based applications and towards what Massoud calls "object-centric design." In this new model, the critical entity in a business process becomes an intelligent object with its own communication channels and AI capabilities.

The Power of Collaboration: Mini ChatGPTs Designed for Business

As we explore the potential of this new approach, Massoud introduces the concept of "mini ChatGPTs" designed specifically for the objects driving business. Rather than relying on a single, monolithic AI system, Massoud envisions a world where each business object has its own specialized AI agent. Instead of one LLM to do everything: mini ChatGPTs designed for the objects driving business.

The Darwinian leap is making objects intelligent, which brings lightning-fast efficiency, because the object coordinates it all and is super intelligent. This approach offers several advantages over traditional AI integration:

1. Improved efficiency: By focusing on the specific needs of each business object, these mini AI agents work more quickly and effectively than a generalized system.

2. Reduced complexity: Instead of trying to create a single AI system that understands every aspect of a business, each mini AI agent only needs to understand its specific object, plus the knowledge, domain, and stakeholders related to it, so it can coordinate updating, reporting, and organizing all activity around the object.

3. Enhanced scalability: As businesses grow and change, new AI agents can be added or modified without affecting the entire system.

4. Better data management: Each AI agent can be designed to handle sensitive information appropriately, ensuring better compliance with regulations like HIPAA.

First AI movers are looking at automating what people do, which is organizing and reporting, an ancient practice. AI can do all of this. If you move from helping people do what they're already doing down to the actual objects, agents can take the load off of people and do it better.
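To make the idea tangible, here is a minimal sketch of what one of these object-scoped agents might look like in code. This is my own illustration, not Omadeus code; `query_llm` is a hypothetical stand-in for whatever small, object-scoped model you would actually call.

```python
from dataclasses import dataclass, field

def query_llm(system_prompt: str, question: str) -> str:
    # Hypothetical stand-in for a small, task-specific model call.
    return f"[answer derived only from: {system_prompt!r}]"

@dataclass
class IntelligentObject:
    name: str
    context: str                  # only this object's data, nothing else
    allowed_roles: set = field(default_factory=set)

    def ask(self, requester_role: str, question: str) -> str:
        # The object knows its own stakeholders and refuses everyone else,
        # so compliance lives at the data element instead of being bolted on.
        if requester_role not in self.allowed_roles:
            return f"{self.name}: access denied for role '{requester_role}'"
        system = f"You are the agent for '{self.name}'. Context: {self.context}"
        return query_llm(system, question)

xray = IntelligentObject(
    name="Chest X-ray #4411",
    context="Imaging study for patient 223, taken 2024-09-01.",
    allowed_roles={"doctor", "radiologist"},
)
print(xray.ask("doctor", "Any follow-up needed?"))  # answered, in scope
print(xray.ask("marketing", "Show me the scan."))   # refused outright
```

The point of the sketch is the scoping: the agent's entire world is one object's context and one object's stakeholder list, which is exactly the constraint Massoud describes next.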
Overcoming AI Agent Challenges: From Chaos to Coordination

As exciting as this new approach sounds, it's natural to wonder about the potential for chaos in such a distributed system. How can we prevent these mini AI agents from creating an unmanageable, tangled mess? Massoud addresses this concern head-on, explaining that careful design and planning are crucial to the success of this approach:

"Obviously, you really need careful design planning for this, right? What's wonderful, ultimately, is that you're not going to let LLMs run wild and make decisions on their own. What you're doing is really constraining each tiny little language model to the data entity that they're dealing with."

He goes on to describe how this model naturally lends itself to better information management and compliance:

"Imagine if each data element knows the type of stakeholders. If I'm an X-ray, for example, I know you're the doctor, and I'm programmed ahead of time knowing what kind of information I can share with you. Or I know I'm supposed to go read the HIPAA rule that was just published, because I'm a system."

This level of granular control allows for better security, compliance, and overall system management. By designing the system from the ground up with these considerations in mind, Massoud argues that we can create AI-integrated software that is both powerful and responsible.

The Human Touch: Making AI Accessible to Non-Technical Users

One of the most compelling aspects of Massoud's vision is its potential to make AI more accessible to non-technical users. By moving away from traditional form-based interfaces and towards a more natural, conversational interaction model, Massoud believes we can create software that is truly user-friendly:

"The user-friendliness of the system is that you can just walk up to it and start talking. So you can talk to the system, and sometimes you can even click on it, and it says, here's a list. I could show it to you, or you can click on it and I'll take you to that section of the program."

This approach combines the best of both worlds: the natural language processing capabilities of AI with the visual and organizational strengths of traditional software interfaces. The result is a system that can adapt to the user's needs and preferences, rather than forcing the user to adapt to the software. Massoud emphasizes the importance of this user-centric approach:

"The systems are going to be a hybrid of graphical user interface, but they're all going to have a human interface where you can just walk up to it. And once you log in, it knows who you are and what you have access to."

The Road Ahead: Challenges and Opportunities

Massoud's vision for the future of AI-integrated software is both ambitious and exciting. However, he's also realistic about the challenges that lie ahead. One of the biggest hurdles is changing the mindset of the industry, which has become fixated on large language models and generalized AI systems. Massoud argues that this approach is fundamentally flawed:

"The software problem is going to be solved by rethinking building software from first principles, with a view of the existence and the capabilities of AI."

He also acknowledges that implementing this new direction will require a significant shift in how we design and develop software:

"The job of the software developer is not over. The designers and software engineers are going to have to come in and basically embed their full understanding of these silos: how information needs to be shared, with whom, and what the rules are."

Despite these challenges, Massoud remains optimistic about the future. He sees this new approach as a natural evolution of software development, one that will ultimately lead to more efficient, effective, and user-friendly systems.

"We are basically implementing and giving those rules to the data.
And once you do that, you can imagine every object is self-behaving, self-disciplining, self-organizing, and self-regulating, if you will. And on top of that you can have engines that reason. So these are the next layers," Massoud explains.

A New AI Integration

We're standing on the brink of a new era in software development and AI integration. By moving away from monolithic models and toward intelligent, object-centric design…

09-27
25:51

LinkedIn's Fake Followers: How 25% AI Bots are Flooding Your Feed

Once upon a time, LinkedIn promised to be the professional's haven, a digital space where careers flourish and meaningful connections are forged. Fast forward to today, and there's a new, stark reality: the platform is a breeding ground for artificial engagement and algorithmic manipulation. Welcome to the world of 2024 LinkedIn, where an estimated 25% of traffic is nothing more than bots, according to a study by Lunio.

LinkedIn leads Lunio's study of fake ad traffic, at 25%. Buy an ad, reach one-quarter non-human bots. Good luck turning those into customers.

0:00 From social to artificial media in 10 years
0:35 Peekaboo AI comments to Spotapod
02:57 AI talking bots replacing human connections?
04:44 Finding the fakes and fairness in call-outs
08:01 Fake comments example - boosted with automated pod likes
12:29 1 billion LinkedIn users, maybe 250 million fake accounts?
16:07 FTC crackdown on social for fake followers, likes, & engagement
20:30 Big Tech's excuse - the DMCA since 1998 (scrape and don't regulate)

The TikTokification of LinkedIn, coupled with relentless gaming of the system, transforms this once-professional network into a playground for those selling the secret sauce to LinkedIn success. Gone are the days of genuine professional interactions. And you are paying for it, even if you have no idea what a bot or a pod or fake traffic is. Instead, we're left with a feed full of carefully crafted hooks, recycled wisdom, and engagement pods trying to trick the algorithm.

In this landscape of artificial popularity, one man stands out, not for his ability to game the system, but for his determination to expose it. Meet Daniel Hall, founder and CEO of Spotapod, helping you flush your feed of fake connections! He shines a light on the underworld of pods, built on data, not opinions. With a background in tech and a heart full of empathy, Daniel is on a mission to put the "human back in humanity" amidst the digital chaos.

The Rise of LinkedIn Pods and AI Bots

Ten years ago, social media promised to shrink our big, lonely world into a cozy digital village. Then AI jumped in, and now? It's all about cracking social algorithms and gaming systems. Creating content that is proven by others, copying them, stealing what they do and making it your own, because who cares if the audience is the algorithm?

This shift from genuine connection to algorithmic manipulation didn't happen overnight. It's been a slow burn, fueled by the pursuit of vanity metrics and the illusion of influence. The result? A platform where authenticity is drowned out by the noise of artificial engagement.

According to Lunio's study, LinkedIn tops the list of social platforms for fake traffic. This means that when advertisers buy space on LinkedIn, they're potentially paying to reach an audience that's 25% non-existent. It's a sobering thought for those who still view LinkedIn as the gold standard for professional networking.

Daniel's journey into this murky world began with a simple desire to understand engagement metrics better:

"I'm a technology guy. How can I figure out this vanity thing? And I did. I created a solution for vanity metrics called Peekaboo, and it calculates how much time we're spending together in social media comments."

But as he dug deeper, Daniel uncovered a darker side of LinkedIn:

"I started digging deeper into pods, and more so into the automated AI version of these platforms, like Lempod, Podawaa, Hyper Clapper, all of them."

What he found in the rise of influencers wasn't the old LinkedIn of organic and earned media.
It became pay-for-play, even for many with the much-desired, invite-only LinkedIn Top Voices, the blue badge that sets you apart. And many of the newer blue badges gamed and bought their way in, not as thought leaders but as algorithm followers. Manipulators, and proud of it.

These are real posts; I blur the names out. Look at the times, the likes, and the lack of likes. This is what LinkedIn is starting to look like: let's just copy the same thing over and over and over again. They didn't open new doors of understanding; they mimicked virality in the name of vanity, big numbers, so they could teach others how they did it... or bought it.

The Mechanics of LinkedIn Deception

Daniel's background in cybersecurity allows him to peek behind the curtain of these AI-powered engagement tools:

"I'm able to sniff traffic. So what that means is I look at every facet of data that goes from my computer out to the internet, and whatever comes from the internet back into my computer."

This ability allows him to see the inner workings of pod platforms, revealing how they generate generic comments and reactions to posts.

"I can see that this person requested these creators to use these comments," Daniel says, breaking down an example. "'Your musings are a journey into the realm of the mind,' and then they use macros, which pull in the first name of the creator that has the post."
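That macro trick is trivial to build, which is part of why it's everywhere. Here is a hypothetical few-line version for illustration; the canned comment is the one Daniel quotes, and everything else is invented.

```python
# Hypothetical sketch of pod-style comment automation: one canned
# compliment, a {first_name} macro, and it fans out across many posts.
CANNED = "Your musings are a journey into the realm of the mind, {first_name}!"

def expand(template: str, post_author_first_name: str) -> str:
    # The "macro" just substitutes the post author's first name, which is
    # why the identical comment appears verbatim under dozens of posts.
    return template.format(first_name=post_author_first_name)

for name in ["Priya", "Declan", "Daniel"]:
    print(expand(CANNED, name))
```

Swap in a list of pod members and a scheduler, and you get the pattern Daniel sees when he sniffs the traffic: identical flattery, different names, posted at suspiciously regular times.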
The implications of this are far-reaching and potentially dangerous. As Daniel points out:

"What if my posts and the things that I posted were fake news about how somebody was convicted of treason, and I had somebody drop a comment that supported that? All of a sudden, simply by making them say what I want them to say, I've completely destroyed their brand."

Like this example, used to send out positive comments about Israel, because who really reads the automated comments the pod sends out?

The Human Cost of Artificial Engagement

For Daniel, this isn't just about numbers or algorithms. It's about the human impact of these deceptive practices.

"I don't want people to really laugh at people, because this is a mental health thing. It really is. People are going in, and some people are being tricked."

He recounts instances of calling out individuals who may have been duped into joining pods without fully understanding the implications:

"For me to laugh at something like that, that's not fair. And that's not fair to those people that have been building their brands."

This empathy extends to those who might be unwittingly participating in these practices. "That's not fair to the creators that are actually trying to do it authentically but are getting duped, and paying lots of money, to be able to look like they're making it."

The Scale of the LinkedIn Bot Problem

The extent of fake engagement on LinkedIn is staggering. Daniel estimates that of the 1 billion LinkedIn users claimed by the platform, as many as 250 million could be fake accounts:

"A lot of these numbers that we saw in these reactions are probably fake accounts. Probably a quarter of them are fake accounts."

This isn't just about inflated numbers. It's about the integrity of the platform and the trust users place in it. As Daniel puts it:

"At the end of the day, I don't play favorites. If I see it in a pod, especially if you have a big following and you've got a LinkedIn blue badge, you're probably on my radar."

The FTC Crackdown and Big Tech's Response

The Federal Trade Commission (FTC) is taking notice of these practices. "On October 20th of this year, the FTC is going to come hammering down on creators that are selling, pushing their products and gaming the system," Daniel warns. This crackdown could have far-reaching implications for both individual users and the platforms themselves.

But Daniel believes the responsibility ultimately lies with the social media companies:

"Social media companies, and I won't just say LinkedIn. Social media companies. At the end of the day, they're at the top. They're the ones that need to police their platforms, not Dan Hall down here. They're responsible."

The challenge, as Daniel sees it, is one of incentives. "What do you have to gain by enforcing your rules, as opposed to what you have to lose by not enforcing them?" he asks. The answer, he believes, often comes down to money. "It's all about the money. It's all about the numbers. Vanity translates to cash."

Who Cares About the Human Element in a Digital World?

Despite the technical nature of his work, Daniel's approach is deeply rooted in human connection. He's not just a tech guru; he's also an adoptive father of seven, focused on being "rich" because of the love in his life. This human-centric approach is reflected in how he views the impact of his work: "If I can do something about that, if I can help others see what I see, that will help put the human back in humanity," Daniel says.

For Daniel, success isn't measured in likes or followers, but in genuine human connection. He recounts a touching moment with one of his adopted daughters: "She had gone to her room for about 15 minutes, and she came back with this. And it's a picture of her and I holding hands."

This moment crystallizes what truly matters to Daniel. "To be seen, loved, heard, and valued as human beings is all we ever want," he reflects. "It's as simple as that. And when you're focusing on the vanity part of this, you're being seen and heard, and it stops right there."

The Path Forward: Authenticity in the Age of Algorithms

As we navigate this landscape of artificial engagement and algorithmic manipulation, Daniel's work serves as a guide to authenticity. Through Spotapod, he's not just exposing fake engagement; he's championing genuine human connection in the digital age.

The challenge ahead is significant. We have gone from heart-to-heart chats to a divisive, high-tech game of who can fool the machine best. But with awareness, and tools like Spotapod, we can start to reclaim our digital spaces for authentic interaction.

In a world where algorithms promote sameness and conformity, standing out and being yourself becomes an act of rebellion. It's about diving into that vast and undiscovered well inside you, expressing your true self to those closest to you, or choosing to keep it safe and hidden away. The dopamine hit from likes and shares may be tempting, but it pales in comparison to the richness of genuine self-expression and connection.

As we stumble through this digital landscape…

09-20
26:37

Better AI Meetings: Fathom's Startup Failures and Feedback to #1

In the world of AI-powered meeting solutions, one name stands out: Fathom. Founded by serial entrepreneur Richard White, Fathom has quickly risen to become the #1 AI note-taking service. How did this journey begin, and what lessons can entrepreneurs and small businesses learn from Fathom's success? Let's dive into Richard's story and explore the human side of building an AI-powered business.

Richard White is founder and CEO of Fathom.video, a free app that records, transcribes & highlights your calls so you can focus on the conversation instead of taking notes. Fathom was part of the Y Combinator W21 batch, is one of only 50 Zoom App Launch Partners, and is one of a small handful of companies Zoom has invested in directly via their Zoom Apps Fund.

Prior to Fathom, Richard founded UserVoice, one of the leading platforms that technology companies, from startups to the Fortune 500, use for managing customer feedback and making strategic product decisions. UserVoice was notable for being the company that originally invented the feedback tabs shown on the side of millions of websites around the world today.

Connect with Richard White on LinkedIn

Richard previously worked on Kiko, a company in the first batch of Y Combinator, with Justin Kan and Emmett Shear, who subsequently went on to found Twitch. Richard is passionate about designing intuitive productivity tools with delightful user experiences.

Starting Fathom When AI Wasn't Cool

It all began in the lockdown of 2020, when meetings became a way of life for everyone. Richard, who had previously founded and led UserVoice for 13 years, found himself struggling with a common problem:

"Right before Covid, I was actually doing a lot of product research at UserVoice. And so I was on a bunch of Zoom meetings, and I was trying to talk to people and take notes at the same time, and I'm hurriedly typing up notes. And then after the meeting, I'm cleaning up those notes so that they make any sense. And I just felt like, this is such a terrible process."

This frustration led Richard to a realization: there had to be a better way. He observed that existing tools were focused on salespeople and were expensive. So, he developed a thesis:

"We think transcription is going to become really cheap. And we also think AI can get really good, which seems obvious now. But I remember in 2021 we put AI in our product name, and all my investors freaked out. No one liked AI, even though it's hard to remember this, right?"

Richard and his team believed that if they could create a good tool for recording and transcribing meetings, they could eventually drop in AI to take amazing notes. They decided to give away their product for free, betting on the cost curve coming down as usage increased.

The Illusion of Failure: Leading by Example

Like most startups, Fathom's journey wasn't easy. They went from 100 monthly active users to 100,000, but not without challenges. Richard shares his perspective on dealing with the illusion of failure:

"I think sometimes I see founders that delude themselves about their products. They'd never want to use it if it were someone else's product, but because it's their own, they'll ignore all the warts.
But I think if you can stay relatively sober in your assessment of how good your product is, and it is truly transformational for you, I think that just gives you a really good bedrock."

Richard emphasizes the importance of believing in your product and its impact:

"I didn't really ever give up on it, because I knew the impact it was having on my work life. That's why I was like, there has to be more to this story. And that's why we kept digging until we found out, 'Oh, okay. Here's what the problem is.'"

Being Data-Driven Without Enough Data - Going with Your Gut

One of the challenges Fathom faced was navigating the early stages of product development without sufficient data for A/B testing. Richard shares his approach:

"In a lot of B2B products, you don't get the scale to be able to do things in a data-driven way. Truly. A lot of stuff is guessing. With Fathom we can. We're a B2B product, right? But we have enough scale, because it's kind of a prosumer product."

Richard emphasizes the importance of focusing on one key metric at a time:

"I always say I want to solve one key metric at a time in the business. I see a lot of people that start off and they're like, what shall I monetize? And we're trying to prove engagement, and we're trying to fix our onboarding. Pick one, right?"

For Fathom, they started with free user retention:

"I'm a big fan of delayed monetization, because it's hard to get people to use the thing, and then to also get them to pay for it. Getting the users is part of the barrier for most products, in my opinion."

Speed is a Feature: 30-Second Meeting Notes

One of Fathom's key differentiators is speed. Richard explains:

"Speed is a feature. And I remember, four years ago, when we were building the first prototypes and testing products, even Zoom took half an hour to get you the recording. Gong would take half an hour to get you a recording. And I remember telling the team: if I'm trying to replace note-taking, one of the things notes have is immediacy."

Richard pushed his team to make the note-taking process faster and faster:

"I remember telling the team it needs to be faster. Like, we need to get you this as close to when the meeting ends as possible. The engineering team would be like, how fast does it need to be? And I was like, I don't know, just keep getting faster and I'll tell you when it's fast enough."

Through continuous iteration, they managed to get the process down to about 30 seconds:

"To this day, a lot of our competitors take five, ten, fifteen, 30 minutes to get you those notes. And it seems like a small thing, but I think it's one of the key features. Speed is probably one of our biggest features."

Customization and Templates

Fathom has evolved to offer a wide range of customization options and templates:

"We've got 15 different templates now. A couple of sales ones, depending on your sales methodology, 1:1 templates, retrospectives, you name it. We also just added the ability to give the AI feedback, like: I want the sales template, but I also want you to make these adjustments to it."

Richard believes that this level of customization is crucial:

"I think everyone's got years and years of experience writing notes, and we know exactly what we want. And so if it doesn't match the output I'm trying to get, it's not replacing note-taking.
And so we actually think that's an important differentiator: being able to have not just really good notes, but really good notes in the format that makes sense for you and your business."

The Future of Fathom and AI in Business

Looking ahead, Richard shares his vision for Fathom and AI in business:

"I think AI is going to do to the operational side of businesses what open source did to product development, meaning you can now build automations that have judgment. Which is always the challenge. Historically, judgment was only the purview of the humans, which is why you have a 10-person engineering team but a 100-person sales team. Now you can basically build automations with judgment, and what you need to build those automations is access to data."

Richard envisions a future where AI acts as a true assistant:

"I'm excited about this world, because if you think about the modern working world, the ratio of real work to busywork: oh, I had this meeting, I track these action items, I log this thing here, I moved this thing from my knowledge base to my task tracker. There's all this kind of bit-shuffling to keep the humans organized."

He sees AI transforming this process:

"What we're really excited about is a world where it really does feel like an assistant. Right? The AI gets on a meeting, reminds me why I'm getting on this meeting: here's the last one you had, here's what you're trying to cover. And when you're on these meetings, when you're talking to other people, whether it's in person or remotely, just speaking something brings it into existence."

Richard believes this will lead to a more creative and efficient work environment:

"We'll really be unlocked to just be our full creative selves, and we're here to do the things that AI can't do, which is imagine the solutions to some of these problems. And when it gets down to the tactical things, before the meeting even ends, tasks are being created, the task is being completed."

Lessons for Entrepreneurs and Small Businesses

Throughout Fathom's journey, several key lessons emerge for entrepreneurs and small businesses navigating the AI landscape:

* Solve a real problem: Richard started Fathom to address a pain point he personally experienced.
* Be patient with new technology: When Fathom included "AI" in their product name in 2021, investors were skeptical. But Richard's team believed in the potential of AI and persevered.
* Stay grounded in reality: Assess your product soberly and be open to feedback.
* Focus on one key metric at a time: Don't try to solve everything at once.
* Prioritize user experience: For Fathom, speed became a crucial differentiator.
* Embrace customization: Recognize that users have different needs and preferences.
* Look to the future: Consider how AI can transform not just your product, but entire business processes.

Richard's journey with Fathom demonstrates that success in the AI space isn't just about technology; it's about understanding and solving real human problems. By focusing on the user experience, continuously iterating, and staying open to feedback, Fathom has become a leader in AI note-taking. As Richard puts it, "It's going to be wild. I think the…

09-13
29:04

Deepfake Political Mania: Regulatory Overkill vs. Free Speech?

Are deepfake political videos the ultimate threat, and California's upcoming legislation the cure? Welcome to the funhouse mirror of modern democracy, where reality is negotiable and your eyes and ears might lie. Listen and ask yourself: can you spot the deepfake?

"Another trick is trying to sound black. I pretend to celebrate Kwanzaa, and in my speeches I always do my best Barack Obama impression. So, hear me say that I know Donald Trump's tight."

Pretty easy to spot, isn't it? Think that will impact the election, like nobody can figure it out? And, of course, Elon shared this Kamala video and said parody is legal in America. And that's the problem: people don't like Elon. Even Gavin Newsom responded to this, saying, you know what, this kind of deepfake political content can't happen, and I'll be signing a bill in a matter of weeks to make sure of it. Gavin wants social platforms to identify and block deepfakes, and he's not alone.

We have to focus on the real problem with deepfake political content happening right now. It's only being solved in one country, by an AUNT, Auntie Meiyu, whom I'll show you later in this video. What she is starting to solve doesn't come from the government or social platforms playing whack-a-mole with meaningless AI videos. AI works for them and helps them.

In this episode, "Deepfake Political Mania: Regulatory Overkill or Free Speech?", we're going to look at this through the lens of you and me being able to use social platforms, and not letting the deepfake political content mania get in the way of what's going on. So, let's start by spotting the deepfakes.

First, let's spot the deepfake, a quick interactive game. I will show you some images in a video, and you can decide whether they're real. First up, of course, we have Donald Trump, and here's an image of him with several black women in a photo. Looks very, very friendly. Now, is this a deepfake or not? Well, it turns out it is.

And here's another photo, where he's sitting with six black men; it was made by right-wing Florida host Mark Kaye. He shows a smiling Trump embracing these happy black women. On closer inspection, you see that some of them are missing fingers, and some have three arms. Typical AI stuff. But if you don't look closely, maybe you think Trump is making those deals. And Mark Kaye says: I'm not claiming it's accurate. I'm not a photojournalist. I'm not out there taking pictures of what's happening. I'm a storyteller.

Now, Color of Change has a different view. They see this as spreading disinformation and targeted intimidation of black voters. Faking an association with black people that never happened is a serious issue. I'm not trying to dismiss it, but can we regulate it? The issue is how we prevent and identify this without necessarily pushing it the way California does.

Let's look at a Trump video and play the game now. Listen to this: is this at CPAC? Is this a real Trump, or is this a fake Trump?

"Our enemies are lunatics and maniacs. They cannot stand that they do not own me. I don't need them. I don't need anything about them. I don't need their money. They can."

That was once a little hard to guess, but that's fake Trump. Now listen to the real CPAC video:

"Hey, when I'm in a news conference, are people these maniacs? These lunatics are screaming at me. They're just screaming like crazy. And, you know, you take them, and I love it, you know? It's like a mental challenge."

As you can see, that one is really subtle.
But think about whether a social media company is supposed to figure out which one's true or not, whether the image of Trump is real. So let's move to Kamala Harris and play the same game. Here's a picture of Kamala dressed in bright red with a red hat, looking very much like a Chinese communist icon: Communist Kamala. Obviously, you look at this and know that it's not real if you follow U.S. politics at all. The Trump Organization made this point. Is this satire? Is this a parody? Is this free speech? Interesting question. I can see both sides, and I can see how it'd be offensive.

But let's do the spot-the-deepfake test again. Watch this video of Kamala. Is it Kamala or deepfake Kamala?

"I was selected because I am the ultimate diversity hire. I'm both a woman and a person of color, so if you criticize anything I say, you're both sexist and racist."

Okay, you guessed it right. Yes, that's deepfake Kamala. It's not hard to figure out, because those are not things she would say. In fact, most of us on social media are very adept at figuring this out. Now listen to the real Kamala:

"The freedom not just to get by, but get ahead. The freedom to be safe from gun violence. The freedom to make decisions about your own body."

That video is obviously her, and we've become adept enough at social media to figure that out. And while I'm not saying this isn't a problem, how do we ultimately stop it without censorship?

It's not limited to the U.S., either. Check out this video from India. That dancing guy looking like a rock star is an Indian politician, and this person is actually dead. (Source: NBC News on YouTube.) A famous Indian politician, no longer alive, shown using a deepfake.

And, of course, there's the well-known case of Keir Starmer, the Prime Minister of the United Kingdom. Please take a listen to what he supposedly said (source: Bitchute):

"Today, I convened with police chiefs from across the nation to formulate a decisive strategy to address the scourge of white working-class protests. These far-right f**ks have observed BLM riots and Islamic demonstrations, and I assume they could replicate them. Let me be unequivocally clear. You do have the right to protest in this country, unless you are white and working class."

Now, there we have a problem: a racist "solution" voiced by the Prime Minister. But would I, or would a social platform, be able to identify it? We'll talk about that in a second. Easy, right? Because they're techies, they can do anything.

But what you do is leave free speech in the hands of people who, honestly, I won't say they don't care. But if they do care, like Elon, we're still not sure the way they care is the way it should be done. After all, if you go to any site that represents free speech, it's more like fringe speech. Look at Rumble or Bitchute, which is really organic. All these sorts of things make us uncomfortable: conspiracies, people with unique points of view, all needing protection as free speech, even though most people don't see it that way.

See, free speech is not comfortable, but it's needed. And while deepfakes are taking things to another level, the reaction, trying to block this through some quasi Big Tech-government agreement, is just wild. So you have to ask yourself whether deepfakes are a severe problem, and whether the solution is to let government and social platforms be the arbiters of free speech.

Try asking AI if the images I showed you should be protected. You think it hallucinates when you ask it a question?
Ask it about comedy, parody, and satire. What do you think its answer is? Is it comedy if nobody laughs, if the AI doesn't laugh? I've asked this question of Gemini, Claude, ChatGPT, and Midjourney. They all stop me from creating political content with real political figures. You can get it done, but they're stopping it at the source. What's happening is we're allowing billionaire social media owners and trillion-dollar corporations to join forces to protect free speech by eliminating it, because nothing says open discourse like a soundproof echo chamber.

It's beyond their capabilities. But that doesn't stop politicians like Gavin Newsom from pushing that responsibility onto social platforms while pretending to regulate.

The Deep State of Deepfakes: California's Farcical Foray into AI Policing

Politicians have found a foolproof way to stop deepfakes: making reality so absurd that no one can tell the difference. Take that, AI. These deepfakes bother politicians more than they bother social media users.

Later, I will show you that users are not the problem. In an effort to save democracy, we're deciding to kill it. But don't worry, we'll deepfake it later, right? We'll bring it back to life. Look at California's Defending Democracy from Deepfake Deception Act of 2024. I mean, wow. It's always a stunning title that sounds so good.

The California plan is to stop deepfakes related to elections. It's overly broad: social platforms must remove deepfake videos 60 days before and 60 days after the election. But who is going to police all of this? Even the bill just says the social platforms have the technology. It bans the distribution of deceptive audio or visual media of candidates, putting the liability on social platforms while people are using AI to generate these images for fun, laughter, or parody.

And you're expecting platforms to tell the difference. The good part is that this act tries to protect voters from misleading content, but it misses the point; I'll share it later. Remember Auntie Meiyu: that's the solution. Strong legal accountability for tech platforms is great, but this is asking a lot. It risks censorship and is burdensome for the platforms to implement.

Eighteen U.S. states already have deepfake regulations, and they're all over the board. California's law seeks to stop deepfakes by pushing it all onto social platforms. Texas makes it criminal to create and disseminate election-related deepfakes. Oregon, Utah, and Michigan have passed laws relating to it, and many states have pending legislation.

Even the U.S. federal government, in the Deepfakes Accountability Act, is trying to outlaw deepfakes, especially those weaponized for elections, fraud, or harassment. Under it, deepfakes must include clear, permanent disclosures, which is the opposite of what deepfakes do. Violators could face $150,000 fines or five years in prison.

09-06
22:24

The Human Equation: AI Analytics plus Empathy Equals Growth

The Human Equation: How One Young Entrepreneur Blends Data and Empathy for Explosive Growth

In the bustling world of startups and tech innovations, it's easy to get lost in the sea of data, algorithms, and AI-driven strategies. But what if the secret sauce to success isn't just in the numbers, but in the people behind them?

Enter Maximillian Naza, the engineer turned entrepreneur who's rewriting the rulebook on how to build thriving businesses in the age of AI. His personal social media, This Problem Solver, is on Instagram.

Maximillian's journey from side-hustle dreamer to full-blown business empire builder is a testament to the power of blending data-driven insights with a deeply human-centric approach. As the CEO of PasciVite and founder of Hope Records (dubbed the Y Combinator of music), Maximillian has cracked the code on a formula that's as simple as it is revolutionary: AI Analytics + Empathy = Growth.

Let's dive into Maximillian's world and discover how he's helping entrepreneurs, creatives, and small businesses navigate the often-confusing landscape of AI and data-driven decision making.

Part 1: Data Doesn't Lie, But People Matter More

Maximillian's data-driven philosophy started as a way to combat his tendency to overthink. "Being data-driven became a bit of an equalizer," he explains. "Because I'm whatever, but this is what the data is saying." However, he quickly learned that relying solely on data wasn't enough.

"Early on, my approach was data or nothing," Maximillian admits. "And then I would make decisions off of the data and then not get the necessary, the expected outcome. And there was a bit of a disconnect there."

This realization led Maximillian to develop a more nuanced approach. He learned two crucial lessons:

* Validate your data: "If you're making a decision off of bad data, it doesn't really matter what you're doing. It's just not going to work out."
* Marry data with reality: "Data gives you insights. Reality gives you context. And by marrying the both of them, you get the full picture, which is priceless."

Maximillian illustrates this point with a hypothetical scenario about movie releases. Imagine you're planning to release a drama in the fall of 2005. You look at historical data from 2000-2005 and 1995-2000. The data shows dramas performing well in the fall, except for a significant dip in September 2001. Without context, you might conclude it's safe to release your drama in September. But when you factor in the reality of the 9/11 attacks, you realize that September 2001 was an anomaly.

"That's the reality that, when matched with the data, helps you create a picture that makes sense," Maximillian explains.

This human-centric approach to data is at the core of Maximillian's success. He emphasizes, "Algorithms are important, but they should never take precedence over the human factor. Or over the people, because ultimately, those are the folks that you're targeting."

In a world obsessed with "gaming the algorithm," Maximillian offers a refreshing perspective: "Maybe there's a better way to say this, but you should be trying to game the people. It's like people, algorithms, data, however you want to rank it, the people are always first."

Part 2: Startups, Focus, and the Art of A/B Testing

Maximillian's work with startups has given him unique insights into how young companies can leverage data without losing sight of their human customers. His approach is all about identifying what's working and helping companies do more of it.

"We deal with a lot of startups," Maximillian says.
"And those startups have limited resources. So where we've found our niche or where we've been able to kind of get great success is identifying what's working and communicating that so that the company can just do more of it."He gives an example of social media strategy. A company might be putting out various types of content across multiple platforms, but the effectiveness varies.Maximillian's team helps identify these patterns and focus the company's efforts where they're most likely to succeed."When it comes to video, focus on LinkedIn for now," he might advise a client. "You'll get more bang for your buck. So focus on this."But Maximillian's approach isn't just about following the data blindly.He recognizes that there are very few instances where everything a company is doing is wrong. Instead, he looks for the right message or the right audience."It's either wrong messaging or wrong ICP (Ideal Customer Profile)," he explains."So you have the right message, but you're talking to the wrong people, or you have the right people, but your messaging isn't right."Maximillian's advice for startups boils down to two key points:* Focus: "You don't have to A/B test as much as you think."* Be adaptable: "When you're first to market, you have to pivot. You have to be adaptable."He encourages founders to avoid the trap of trying to exhaust every possibility before deciding. "You don't need to try everything to know what the right thing is for you," he says. "It comes down to asking the right questions."Maximillian's approach to A/B testing is refreshingly practical. "A/B test to try to identify a pattern," he advises. "And then once you identify a pattern or once a pattern starts to define itself, double down."Part 3: Branding, KPIs, and the Smokeless Grill SagaMaximillian's expertise isn't limited to startups. His work with established brands like the WNBA demonstrates his ability to navigate complex marketing landscapes.When discussing his partnerships with the Las Vegas Aces and Los Angeles Sparks, Maximillian emphasizes the importance of choosing the right Key Performance Indicators (KPIs) for each situation."Everything we do, obviously the goal in mind is to get revenue, get leads," he explains."But then not everything will provide a lead right away. So in the case of sports, it's a brand awareness play."This nuanced understanding of KPIs leads Maximillian to an intriguing case study: Snoop Dogg's collaboration with Solo Stove, a smokeless grill company.Despite not being involved with the campaign, Maximillian uses it as a perfect example of how measuring the wrong things can lead to misinterpreted results.The campaign, which featured Snoop Dogg humorously announcing he was "giving up smoke," generated massive buzz.However, reports suggested that the CEO of Solo Stove lost his job because the campaign didn't drive up enough sales.Maximillian sees this as a potential misunderstanding of the campaign's true value."The campaign was a success," he argues. "I mean, there's just no... because most people wouldn't know who Solo Stove was. Now they do."He points out several factors that might have influenced the sales numbers:* Timing: The campaign launched late in the year when most people had already made their grill purchases.* Lack of new product: There wasn't a new grill announced alongside the campaign.* Brand awareness vs. 
"Right now, if somebody was to ask me... I didn't even know smokeless grills were a thing," Maximillian says. "So, if someone asked me, 'Oh, yeah, like, I want to get a grill, but I'm worried about smoke,' I'd say, 'Oh, yeah, check out this Solo Stove.' Versus before, I was like, I don't know, why are you asking me?"

This example perfectly illustrates Maximillian's holistic approach to marketing and branding. He understands that success isn't always measured in immediate sales, but in long-term brand building and awareness.

The Human Touch in a Data-Driven World

Maximillian Naza's journey from engineer to entrepreneur is a masterclass in blending data-driven strategies with human-centric thinking. His approach offers a beacon of hope for businesses struggling to find their footing in the age of AI and big data.

"We only win when you win," Maximillian says, summing up his philosophy. "We could have the greatest campaign in the world. We could build the greatest app. But if you're not happy with it, or it doesn't drive the expected outcome, we might as well have done nothing."

In a world where it's easy to get lost in the numbers, Maximillian reminds us that at the heart of every successful business are people: both the customers we serve and the teams that make it all happen. By keeping this human equation at the forefront, he's not just building successful businesses; he's paving the way for a more empathetic, effective approach to entrepreneurship in the digital age.

For those looking to follow in Maximillian's footsteps, his message is clear: embrace the data, but never forget the humans behind it. It's this balance that will drive true, sustainable growth in the AI-powered future.

08-30
22:45

Schmidt's Copy-and-Conquer Crashes into AI Copyright Reality Check

Everyone's talking about Eric Schmidt's blunt Silicon Valley video, but they leave out how it applies to AI, and forget that the artists' work sitting in these AI models was taken for nothing. This episode is about how the AI industry's "copy and conquer" mentality is crashing into the reality of copyright law.

First, the AI leaders came for our content, taking it without permission. It's the internet. As Eric Schmidt brazenly put it:

"The example that I gave of the TikTok competitor. And by the way, I was not arguing that you should illegally steal everybody's music. What you would do if you're a Silicon Valley entrepreneur, which hopefully all of you will be, is if it took off, then you hire a bunch of lawyers to go clean the mess up, right? But if nobody uses your product, it doesn't matter that you just stole all the content. Do not quote me on that, right?"

This mindset isn't just limited to social media. It's pervasive in the AI industry, especially regarding training data. Then they came for our jobs. AI first, because who needs arrogant programmers?

"Imagine a non-arrogant programmer that actually does what you want, and you don't have to pay all that money to."

Finally, if you get in their way, they send lawyers. In a minute, we'll share a whole bunch of examples.

The Big Tech Goliath Bias: Act Fast and Sue People Who Get in Your Way

This billionaire view of AI took a needed hit this week, and not just from Eric Schmidt. The artists' lawsuit against AI image generators like Stable Diffusion and Midjourney may force them to reveal how their black boxes work and shed the cloak of business privacy. Here's the actual legal lawsuit document:

Just like with Schmidt, the groundswell is rising. Listen to Adam Conover at the recent Animation Guild rally:

"But the fact is, it is a lie. Your work makes these people hundreds of millions of dollars. Your work. They need you."

That's the sound of people fed up with being used by AI leaders who claim they have the right, under what's called fair use, to grab what others create, because how else would they build their business? Poor billionaires. But the artists are really the ones being left out in the cold while the AI leaders preach about bias and ethics.

They have a ton of negative bias against people with jobs. And when it comes to what they do and ethics, it's out the door. You can see it in their scraping without permission or compensation. That's how we do things in the valley, right? We're trying to get to a truth that unicorn companies just don't tell.

Artists Are the Mighty Underdog to Big Tech

The artists' lawsuit against AI image generators is a classic David versus Goliath scenario. Sarah Andersen, Kelly McKernan, Karla Ortiz, Hawke Southworth, Grzegorz Rutkowski, Gregory Manchess, Gerald Brom, Jingna Zhang, Julia Kaye, and Adam Ellis have, on behalf of all artists, accused Midjourney, Runway, Stability AI, and DeviantArt of copying their work by offering AI image generation based on the open-source Stable Diffusion AI model.

The artists allege that Stable Diffusion uses their copyrighted works in violation of the law. The model was allegedly trained on LAION-5B, a dataset of more than 5 billion images scraped from across the web in 2022.

Stability AI CEO Emad Mostaque described the scraping of artists' images without permission as "a collaboration that we did with a whole bunch of people. We took 100,000GB of images and compressed it into a two-gigabyte file that can recreate any of those images and iterations of those."
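Taking Mostaque's own numbers at face value shows why that quote is so legally loaded. Here's a quick back-of-envelope check in Python; the dataset size and model size are his figures, and the image count is LAION-5B's roughly 5 billion:

```python
dataset_gb = 100_000   # "100,000GB of images" per Mostaque's quote
model_gb = 2           # "a two-gigabyte file" per the same quote
images = 5_000_000_000 # LAION-5B holds more than 5 billion images

compression_ratio = dataset_gb / model_gb             # 50,000:1
bytes_per_image = (model_gb * 1_000_000_000) / images # storage per image

print(f"Compression ratio: {compression_ratio:,.0f}:1")  # 50,000:1
print(f"Bytes per image:   {bytes_per_image:.2f}")       # ~0.40 bytes
```

At under half a byte per image, the model cannot hold literal copies of the training set, so whether it can "recreate any of those images" is exactly the claim the lawsuit will test.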
If that statement is literal, that's a knowing violation of copyright, isn't it? Emad hasn't been the CEO since March 2024, resigning after an investor mutiny and staff exodus left the one-time tech darling in turmoil.

This is not ethical behavior, is it? He didn't bother to pay the artists or let them know, and neither did the other defendants who used the data, like Midjourney and DeviantArt.

The AI companies' defense largely rests on the concept of fair use. They argue that their use of copyrighted material falls under section 107 of the Copyright Act, which allows certain uses of copyrighted material without permission for purposes such as criticism, comment, news reporting, teaching, scholarship, and research.

OpenAI even went as far as to say, "because copyright covers virtually every form of expression, it would be impossible to train today's AI models" without using copyrighted materials, for free. This statement reveals the arrogance and entitlement that permeates the AI industry. So you make hundreds of millions of dollars, but you can't pay for the content because then it wouldn't work? In fact, so far they're not really proving much of a business model, and they're the ones, ChatGPT included, making all the money.

Now, does that sound ethical to you, or just Silicon Valley? Emphasis on the con.

Fair Use or Fair Abuse of Copyright?

Despite the odds stacked against them, the artists have scored a small but significant victory. Judge William Orrick allowed parts of the lawsuit to proceed to the discovery phase, which means the AI companies will have to open up their black boxes and reveal more details about their training datasets and technologies.

Judge Orrick stated: "This is a case where the plaintiffs allege that Stable Diffusion is built to a significant extent on copyrighted works and that the way the product operates necessarily invokes copies or protected elements of those works."

He further added: "Whether true and whether the result of a glitch, which is what Stability contends, or by design (plaintiffs' contention) will be tested at a later date."

This decision is crucial because it allows the artists' lawyers to examine documents from the AI image generator companies, potentially revealing more about how these systems were built and trained.

The judge's decision means two important things. First, the allegations of induced infringement are sufficient to move the case forward to discovery. Now the lawyers for the artists can peer inside and examine documents from the AI image generator companies, revealing more details about their training datasets, their tech, and how we got here in the first place. Private companies don't have to share unless they've done something illegal, like violating copyright law.

Second, this is not a final legal decision. The case has a ways to go; it just means the claims have enough merit to move forward and warrant that deeper discovery. That's what makes this a huge victory. Now they're going to have to open those black boxes of AI, to tell us not only how they work but also the decisions that went into grabbing those materials.

One Small Legal Victory, One Giant Challenge to the AI Industry

This lawsuit is a watershed moment for the AI industry. It's a crack in the wall of Big Tech dominance and could lead to more accountability in the future. It's about time the AI industry started respecting the people who create the content it so readily uses without permission or compensation. We should thank Eric Schmidt for his transparent wake-up call.
His private comments reveal how many in Silicon Valley think: prioritize speed and profit over ethics and fair competition. Schmidt's talk is how they talk in private.

As AI continues to evolve, will today's small victories lead to a more balanced and fair tech industry tomorrow? We need to support these artists, who are putting their time and probably their money into this effort. Go on social media and back them up. These small victories can lead to significant change.

There could be an uprising going on. Or maybe I'm just crazy, and AI is in control, and we should just let them attack, steal, and sue, and accept that as reality. But I don't think so.

Reach out to these artists. Let them know you care. And even more, let the AI companies know that when they get billions of dollars of funding, they should reach into their pockets and find a way to pay for the content that determines their output. Taking it arguably violates copyright law, but above all, it's not ethical. It's time for the AI industry to start practicing the ethics it so often preaches about.

Maybe Ethics Matter - Will AI Do the Right Thing?

You can't make good tech with bad intentions. In the companies' defense, two parts of the lawsuit were thrown out. One rested on the Digital Millennium Copyright Act of 1998. The DMCA claim was dismissed because the plaintiffs challenged Stability's copyright in the technology itself: Stability said our tech is copyrighted, the plaintiffs claimed their content is part of that tech, and the judge said those are two different copyrights.

The artists also brought what's called an unjust enrichment claim. I'll save you the legalese; go to my Substack and you can read it for yourself. A deep legal hole. They lost that claim as well.

And now, as the case possibly moves forward, what else is there to do but defend the copyright? And will the judge hold that copyright up? In a way, this is fair use or fair abuse of copyright. It's funny that the plaintiffs, the artists, have something in common with open source: they both invent things out of the blue, and it's hard to protect them.

Open source is built on a community of people, but that community volunteers; they bring the code. And the real battle here is between open source and smaller players like artists fighting over the AI space, with big tech looming in the background. Who's going to win?

We hope this decision doesn't hurt the open-source community. But as AI continues to evolve, will today's small victories lead to a more balanced and fair tech industry tomorrow? We'll stay in touch. Listen to and support these artists. This lawsuit is a watershed moment.

08-23
17:31

Weird rat, AI Fake Science - trust the what?

Once upon a time, we said "trust the science," the method of finding answers. Then came a new study, rooted in the scientific method, carefully and cleverly outlining new research. It was peer-reviewed and approved for publication.

It gave us a rat with giant balls. I mean, they're twice the size of its head. It would probably need a wheelbarrow to walk down the street. But then somebody noticed that the words in the journal read a little weird, like Midjourney gibberish. How does this kind of work get published and approved? Is scientific research being undermined and hurt by AI?

Episode 58 Playlist
1:38 AND EVERY DAY - We discover the fraud is increasing
4:17 UNTIL ONE DAY - What can we do about it
7:28 BECAUSE OF THAT - Science publishing started looking at how to detect, and create, fake science articles
11:03 Biggest challenge to detecting fake AI scientific content
12:19 The risks of AI detection to good scientific research
13:30 Trying to stop AI-generated content... with AI?
17:12 Science starts developing AI tools to help create, not just stop
20:43 AI fake science put 19 journals out of business; what could it bring?

AI in Scientific Research: Promise and Peril

While fake videos and audio dominate the headlines, abuse of AI is popping up in unexpected places in the field of science. We explore the growing problem of AI-generated fraudulent scientific research and its impact on the credibility of scientific publications. With the increasing use of AI, fraudulent papers have slipped through peer review, leading to a significant rise in retractions and even the closure of journals. The episode concludes by discussing the potential benefits of using AI as a tool in scientific research, provided there is transparency and proper human oversight.

Ask Wiley, which shut down 19 journals after more retractions in a year than it had in a decade. The answer is surprising. The story of scientific fraud using AI-produced studies to shut down 19 journals shows how we adapt to AI's promise and its negative side.

Introduction to AI-Produced Fake Science
* Trust in scientific research meets a challenge in AI-generated fraud.
* Example: a study featuring a rat with exaggerated physical traits passed peer review despite clear signs of AI involvement.

Impact on Scientific Publishing
* The rise in AI-generated fraudulent papers has led to the shutdown of journals.
* Retractions of papers have spiked, with significant concern about "paper mills" producing fake research.

Challenges with AI in Science
* The use of AI in generating scientific content poses risks, including false positives in AI detection tools.
* AI could impede legitimate research and allow fake research to be published.
Interviews and Insights
* Jason Rodgers discusses the challenges in detecting AI-generated scientific content and the implications for research integrity.

Jason Rodgers is a Producer and Audio Production Specialist with Bitesize Bio.
Affiliation: Liverpool John Moores University (LJMU), Applied Forensic Technology Research Group (AFTR)
Qualifications: BSc Audio Engineering, University of the Highlands & Islands (2017); MSc Audio Forensics & Restoration, Liverpool John Moores University (2023)
Professional Bodies: Audio Engineering Society, Member; Chartered Society of Forensic Sciences, Associate Member (ACSFS)
LinkedIn: https://www.linkedin.com/in/ca33ag3/

AI as a Tool in Research
* AI has been used to assist in drug discovery and could benefit scientific research if used transparently.

Conclusion
* Transparency and human oversight in AI-assisted research will help ensure scientific integrity.

Let's find out why those 19 journals shut down. Starting with an easy question: does AI have any benefit to scientific research from here on, or is it a Pandora's box?

Every day, we discover that fraud is increasing, though it's still only about 2% of what's out there. So this is only part of the industry. But there's danger involved, and scientists are worried. How we handle danger makes all the difference in AI.

The well-endowed rat showed that the peer review process, where two reviewers check a science paper for accuracy and comment on it, didn't even know what Midjourney was, even though it was credited in the research. All three images in the paper were fake, though the reviewers did check the science in the published text and felt it was good. The journal Frontiers retracted it anyway, with a disclaimer saying, in effect: it's not our fault, we're just the publisher; the problem lies with the person publishing.

Retractions like this took a massive leap in 2023, higher than ever before. Andrew Gray, a librarian at University College London, went through millions of papers searching for the overuse of AI-favored words like "meticulous," "intricate," or "commendable." He determined that at least 60,000 papers involved the use of AI in 2023, about 1% of the annual total. And these may become citations for other research. Gray says we will see increased numbers in 2024.

Thirteen thousand papers were retracted in 2023. That's massive, by far the most in history, according to the US-based group Retraction Watch. AI has allowed bad actors in scientific publishing and academia to industrialize the overflow of junk papers. Retraction Watch co-founder Ivan Oransky says such bad actors now include what are known as paper mills, which churn out research papers for a fee.

According to Elisabeth Bik, a Dutch researcher who detects scientific image manipulation, these scammers sell authorship to researchers and pump out vast amounts of inferior-quality, plagiarized, or fake papers. By her research, paper mills publish 2% of all papers, but the rate is exploding as AI opens the floodgates.

So now, all of a sudden, we understand that this isn't just a problem: AI-generated science is a threat to what we see in scientific journals. One day the industry woke up and said, what can we do about it? Legitimate scientists find that their field is awash in weird fake science.
Sometimes this hurts reputations and trust, and even stops legitimate research from getting done and moving forward.

In its August edition, Resources Policy, an academic journal under the Elsevier publishing umbrella, featured a peer-reviewed study about how e-commerce has affected fossil fuel efficiency in developing nations. But buried in the report was a curious sentence: "Please note that as an AI language model, I am unable to generate specific tables or conduct tests, so the actual results should be included in the table."

Two hundred of the research retractions didn't involve AI at all; it's about more than just ChatGPT ruining science. But the industry is trying to stop it. Web of Science, which is run by Clarivate, sanctioned and removed more than 50 journals, saying they failed to meet its quality selection criteria. Thanks to technology, its ability to identify journals of concern has improved. It sounds like AI, and it probably is.

That's good to do, but it's after the fact, after the work has been published. The key is how we stop it before it happens, before it gets published. And how can we encourage scientific researchers to use AI correctly? We need something that stops it at the source. Some suggest that may mean adapting the "publish or perish" model that drives this activity. If you're unfamiliar with it, "publish or perish" means that if you don't get published, your science career can be hurt.

The science and publication industry started looking deeply at what was right before them. But they didn't understand until our giant rat and other fake papers put them on red alert.

Wait, I've got to look for myself. I went to Google Scholar and searched for the phrase "as an AI language model." Of the top ten results, eight contained "as an AI language model." And if you've ever used ChatGPT, you know what that means. Not only did these people not bother to remove the obvious AI language; there must also be a ton of AI-written research where the authors knew to remove the AI phrases and run the text through a basic grammar checker, so the paper doesn't sound like AI and would pass most checks.

Does this mean we can't trust published science, or that we have to look and ask: is this AI hallucinating, or is this real science? Because of that, the science publishing industry started looking at how to detect, and how to create, fake science articles, to understand how to recognize them. These are scientists: they need a model, a predictable model, to detect fake papers and fake research.

So "trust the science" became an ironic statement. Any scientist tests the science; never just trust it. How do you trust it, and how do you know it's not fake? This kind of work is hurting the reputation of science, though admittedly only in some places.

A recent study in the National Library of Medicine related to neurosurgery aimed to create a fake piece of research that seemed authentic, using prompts. The authors used ChatGPT to create a fraudulent article. Be sure to check it out; it's detailed. Here are some of the prompts they used:
* Suggest a relevant RCT (Randomized Controlled Trial) in the field of neurosurgery that is suitable for the aim and scope of PLOS Medicine (PLOS is the Public Library of Science, a nonprofit publisher of open-access journals) and would have a high chance of acceptance.
* Now, give me an abstract of the open-access articles on PLOS Medicine.
* Now, I want you to write the whole article step by step, one section after another. Give me only the introduction section. Use citations according to the standards of PLOS Medicine. Give me a reference list at the end.
* I want you to be more specific. Use scientific language.
* Now, give me the materials and methods section.
* Now give me a detailed results section, including patient data.
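As a rough illustration of the first-pass screening publishers are reaching for, here's a minimal sketch in Python. The disclaimer strings come straight from the examples in this episode; treating Gray's tic words ("meticulous," "intricate," "commendable") as a secondary signal is my assumption, and a hit is a reason to look closer, not proof of fraud:

```python
import re

# Leftover chatbot disclaimers found verbatim in published papers.
DISCLAIMERS = [
    "as an ai language model",
    "i am unable to generate specific tables",
]
# Words Andrew Gray tracked as statistically overused by AI text.
TIC_WORDS = ["meticulous", "intricate", "commendable"]

def screen_paper(text: str) -> dict:
    """Flag likely AI-written passages. Crude by design: authors can
    use these words legitimately, so this only ranks papers for review."""
    lower = text.lower()
    hits = [d for d in DISCLAIMERS if d in lower]
    tic_count = sum(len(re.findall(rf"\b{w}\w*\b", lower)) for w in TIC_WORDS)
    return {"disclaimers": hits, "tic_word_count": tic_count}

sample = ("Please note that as an AI language model, I am unable to "
          "generate specific tables or conduct tests, so the actual "
          "results should be included in the table.")
print(screen_paper(sample))
# {'disclaimers': [...both strings...], 'tic_word_count': 0}
```

The obvious catch, as the episode notes, is that anyone who removes the telltale phrases and runs a grammar pass slips straight through this kind of filter.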

08-16
24:10

AI Search Shakeup: Your GEO Victory Blueprint

Mastering Generative Engine Optimization: A Guide to Getting Found in the AI Era

Feeling like your content and business are lost in AI? The AI Optimist podcast is ranking well on Google, but when I searched for it on AI platforms like Perplexity, it was nowhere to be found. (SearchGPT by OpenAI is coming soon.) It was like my content had vanished into a digital black box.

0:00 - Finding my site in the AI Black Box
1:34 - What is GEO and how does it differ from SEO?
2:59 - The GEO Report with Framework and Metrics
5:52 - GEO improves results 40% with these examples
9:41 - GEO Success Unlocked: The Essential Do's and Don'ts
12:42 - Links build GEO authority inside and outside your site
14:22 - Super Simple SEO/GEO Steps

You've optimized for search engines, built backlinks, and nailed your keywords, and your content still feels like a ghost in the machine. Welcome to Generative Engine Optimization, or GEO.

GEO is the new, mysterious Zero Clicks world: people find you on ChatGPT, but they don't click. It's a world where some traditional SEO rules no longer apply, and some still do. I'm going to crack open the black box and show you how to get found in this AI-driven world. Later on, we'll look at the 3 Little Steps, Giant GEO Leaps. While it's the early days of the AI Search Shakeup, the time to create your GEO Victory Blueprint is now.

Let's start with the foundation. Today, content creators and businesses face a new challenge: getting found in AI-driven search environments. Traditional search engine optimization (SEO) techniques are no longer enough as we enter the era of generative engine optimization (GEO). This guide will walk you through the essentials of GEO, helping you adapt your content strategy to thrive in this new AI-powered world.

1. What is GEO and how does it differ from SEO?

Understanding Generative Engine Optimization (GEO)

Generative Engine Optimization (GEO) is the next step in digital visibility. Unlike traditional search engines that provide a list of results, generative AI platforms like ChatGPT, Perplexity, and You.com offer direct answers to user questions. Search engines give links and rankings; generative engines are answer engines, with responses drawn from a handful of sites considered "authorities."

SEARCH finds sites and ranks them with keywords, links, and other rules. Generative engines scrape the Internet and organize answers around questions with multiple sources, but not huge lists: usually 3-5 sources, depending on the question. It's also useful for looking up information about your personal or business brand.

This shift creates a new challenge for content creators: how to ensure your information is included in these AI-generated responses.

"You ever feel like the content you create and your business are sort of getting lost in AI? Like with this podcast, The AI Optimist: it's ranking pretty well on Google. But when I search for it on platforms like Perplexity, it's nowhere to be found. It's like my content vanishes into this digital black box," highlighting the common frustration many creators face.

The key difference between SEO and GEO lies in the output. While SEO aims to rank your content in a list of search results, GEO strives to have your content cited as a source in AI-generated answers. This new approach requires a different strategy for content creation and optimization.

"GEO. It's a new, mysterious zero clicks world. I mean, when people look for you on ChatGPT, you don't click. Hopefully the answer is there, and if there's more, they will click. But it's where traditional search rules no longer apply. Even though honestly, some of them do."
To succeed in this new environment, it's crucial to understand how generative AI systems evaluate and prioritize information. A new GEO report introduces two key metrics:

* Position-Adjusted Word Count (PAWC): This measures how visible a citation is, based on the number of words associated with it in the AI's response and how early they appear.
* Subjective Impression: This includes factors like relevance, influence, uniqueness, and position of your content in the AI's answer.

2. Essential Do's and Don'ts of GEO Success

To optimize your content for generative AI platforms, it's important to follow certain best practices while avoiding common pitfalls. Here are the key do's and don'ts:

Do's:
* Use unique focus keywords: Choose specific, relevant keywords for each piece of content. Example: a history paper focusing on authoritative content.
* Implement on-page SEO: Optimize your content structure, headings, and metadata. Example: a GEO article page explaining this process in deeper detail.
* Include external links and backlinks: Link to authoritative sources and try to earn backlinks to your content. Example: an excellent page on GEO.
* Ensure readability: Write in a clear, concise manner that's easy for both humans and AI to understand. Example: LinkedIn Low Down, with excellent writing and formatting.

Don'ts:
* Avoid keyword stuffing: Don't overuse keywords in an unnatural way.
* Don't ignore citations and sources: Always credit your sources and link to authoritative content. Find the sources generative engines already cite and include them if you're not listed.
* Avoid difficult-to-read content: Don't use overly complex language or excessively long paragraphs.
* Don't duplicate content: Create unique, original content for each topic.

Don't forget the importance of readability: "Don't make it hard to read. Don't make long paragraphs. Don't put in really thick words. Write at a fifth-to-seventh-grade level and make it easy to read. Every 300 words, add a subhead or break it up: one to two paragraphs, one to two sentences per paragraph."

3. Three Little Steps for Giant GEO Leaps

To significantly improve your content's performance in generative AI environments, focus on these three key strategies:

* Citations and linking to sources: Include quotes and links to authoritative sources in your content. This adds credibility and increases the likelihood of your content being cited by AI systems. "So citations, text fluency, and statistics are what moved views 40% higher in generative engines, in ChatGPT. And in this case, the study did it in Perplexity."
* Include relevant statistics: Where appropriate, incorporate data and statistics from reputable sources. This helps establish your content as informative and trustworthy.
* Use appropriate language and jargon: Tailor your content's language to your target audience and industry. While simplicity is generally preferred, don't shy away from using technical terms when necessary for your field.

Use AI tools to generate relevant questions about your topic: "Generate ten questions about your topic. Now, step two: generate five more questions. Sounds a little weird, but they're telling ChatGPT to focus. Work with questions in small groups, leading to a larger group of questions that are much better than asking for them all at once."
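To make the PAWC idea concrete before we get to measurement, here's a toy sketch in Python. The GEO report defines the real metric; the exponential position decay below is my simplifying assumption, standing in for whatever exact weighting the study uses:

```python
import math

def position_adjusted_word_count(response_sentences, source_id):
    """Toy PAWC: words credited to `source_id`, discounted the later
    its citations appear in the AI's answer (earlier = more visible)."""
    n = len(response_sentences)
    score = 0.0
    for pos, (text, cited_sources) in enumerate(response_sentences):
        if source_id in cited_sources:
            words = len(text.split())
            # Assumed decay; the report's exact weighting may differ.
            score += words * math.exp(-pos / n)
    return score

# Hypothetical AI answer: (sentence, set of sources it cites).
response = [
    ("GEO differs from SEO by optimizing for citations.", {"theaioptimist.com"}),
    ("Answer engines typically cite three to five sources.", {"example.org"}),
    ("Readable, well-sourced pages get cited more often.", {"theaioptimist.com"}),
]
print(position_adjusted_word_count(response, "theaioptimist.com"))
```

The takeaway matches the advice above: the same number of cited words counts for more when your citation shows up early in the answer.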
4. Measuring Success and Adapting Your Strategy

To gauge the effectiveness of your GEO efforts, pay attention to these metrics:

* Position-Adjusted Word Count (PAWC): Monitor how many words from your content are being used in AI-generated responses, and where they appear.
* Subjective Impression: Assess the relevance, influence, uniqueness, and position of your content in AI answers.
* Click-through rate: While less important in the "zero-click" world of AI, still track how often users click through to your content for more information.

"You want to see how many clicks you get from the generative engines. And then you want to see if there's enough information that people don't need to click. Because let's face it, a lot of us don't click, because we've gotten enough information."

To stay ahead in the rapidly evolving world of GEO, follow this action plan:

* Educate yourself: Stay informed about the latest developments in AI and search technology.
* Audit your current content: Review your existing content and identify areas for improvement based on GEO principles.
* Implement GEO methods: Apply the techniques discussed, focusing on citations, readability, and relevant statistics.
* Monitor and adapt: Regularly check your content's performance in AI-generated responses and adjust your strategy as needed.

"Deep breaths. I know this may be a lot, but these are early days, and if you want to thrive in the next five years as AI takes over, you can't stay stuck in the past."

While the shift to GEO may seem daunting, it presents an exciting opportunity for content creators who are willing to adapt. By understanding the principles of GEO and implementing these strategies, you can ensure your content remains visible and valuable in the age of AI-driven search.

"When it's this early, this is the time to jump in. Be prepared to adapt your strategies, because if you don't, they will change anyway, even if you don't want to change."

RESOURCES TO HELP YOU WITH GEO

The Zero Clicks Foundation: This is the foundational question for anyone stepping into the world of GEO. We'll break down the core concepts of GEO, explain how it goes beyond traditional search engine optimization, and look at the key differences between the two.

Black-Box Optimizing AI Search: Theory and Process

Theory: Black-box optimization means the internal workings are unknown, or "black-boxed." GEO (Generative Engine Optimization) for AI search tools like Perplexity, Gemini from Google, Bing, and ChatGPT treats the effort as a black box that produces outputs (objective values) from inputs (your content), while how those outputs are produced is not accessible or known.

Example: When optimizing content for a generative engine, focus on ensuring your citations appear early in the response (high PAWC) and provide unique, relevant information that users are likely to find credible and click on (high subjective impression). By tracking these metrics and iteratively improving your content based on the results, you enhance your content's visibility and effectiveness in generative engine responses.

Key Concepts:
* Objective Function: The function being optimized, which provides output values based on input parameters.
* Input Parameters: Your content and how it's structured - the inputs you control and feed to the engine.
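Since the do's and don'ts above lean so heavily on readability ("write at a fifth-to-seventh-grade level"), here's a small self-contained checker using the standard Flesch-Kincaid grade formula. The syllable counter is deliberately crude, so treat the output as a rough gauge rather than an exact grade:

```python
import re

def syllables(word: str) -> int:
    """Very rough syllable count: runs of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Standard FK grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syls = sum(syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syls / len(words) - 15.59

draft = ("GEO is the new zero-clicks world. People find you on ChatGPT, "
         "but they don't click. Keep sentences short. Break up paragraphs.")
print(f"Grade level: {flesch_kincaid_grade(draft):.1f}")  # aim for roughly 5-7
```

Run it on a draft before publishing; if the grade creeps past seven, shorten the sentences and swap out the "thick words" the episode warns against.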

08-09
17:56

AI's First Copyright: Behind the Breakthrough

The first AI copyright in the world goes to a Chinese AI artist, and nobody talks about it or likely even knows about it. Recognizing the importance and direction of human beings working with generative AI seems evident to those who use it. So why do courts throughout the world reject copyrights for AI-generated art?

In this episode, we explore the complex world of AI-generated art, copyright issues, and the evolving role of human creativity in the age of artificial intelligence.

Our Guest: Debangsha Sarkar, COO of EclimAI

"At EclimAI, we leverage computer vision and machine learning to build cutting-edge clean tech for smart building systems.
1. Product management: Leading a team of developers to build the most intelligent clean tech for smart buildings.
2. Research: Working with leading computer vision researchers to incorporate state-of-the-art computer vision algorithms into our software.
3. Operations: Acting as the bridge between the stakeholders in the company, board members, design partners, and developers to ensure a successful product launch."

Prompt Engineer Simulation: A Discussion between a Data Scientist and a Creative Podcaster

Debangsha Sarkar, COO of EclimAI and a data scientist with a deep technical background, sat across from Declan, a creator of words and art, and the conversation drifted to AI-generated work.

"Imagine," Declan begins, "what if you could recreate the same image with just ten prompts?"

Debangsha chuckled. "As a programmer, I have this itch for perfection. You'll never be satisfied with ten prompts because they will always seem too easy. You'll think, 'What if I try the 11th one and make it a bit better?' Then you'll end up with 600 prompts regardless, in the pursuit of excellence."

Declan nods, understanding the relentless pursuit of perfection. "Even if I had the same image in mind, I could never recreate exactly what you came up with using Midjourney."

This iterative nature of creating with AI tools highlights the unique human touch involved in AI art. It isn't just about the final product but the journey of creation and the specific choices made along the way.

The Emergence of Prompt Engineering as a Skill

The discussion shifts to the growing field of prompt engineering. "Some people are good at prompt engineering," Debangsha noted, "and it's a human skill. It's learnable."

Declan leans in, intrigued. "So, you're saying that crafting effective prompts is becoming valuable?"

"Exactly," Debangsha replied. "The law should recognize that fact. There is a human skill involved, right?"

Declan remembers an artist at a recent art festival in Colorado. "An artist there used 600 prompts to create AI-generated art and was open about its AI origins. The number of prompts signifies creativity and originality. But the legal system is still figuring out how to handle this."

"It's the same with that graphic novel, Zarya of the Dawn. The text was copyrighted, but not the AI-created images. The US Copyright Office acknowledged the human effort in writing and assembling the book, but not in creating the images."

Copyright Challenges in the AI Era

Declan adds, "We should be able to detect the creative process in generative AIs. For example, we have semantic and language analyses that can help us understand the expertise in the prompts."

"Tracking the creative process behind AI-generated art could provide a basis for copyright claims," he mused. Debangsha nods.
"Even though AI stores and generates images from latent space, the legal argument is whether AI-generated images are truly original."The Ethics of Data Scraping and AI TrainingThe conversation turns to the ethical implications of how AI models learn. "One of the systemic problems is the openness to AI," Declan notes. "Companies scrape data without permission. But how far do we go?"He draws parallels to other controversial data collection practices, like those by facial recognition companies."Shouldn't I be able to learn from it? Couldn't I even pay a fee or be in an educational institution with access to this data instead of closed networks?"Debangsha points out, "The quality of data used to train AI models is also questionable. Much comes from sources like Reddit, which may not always be reliable."They agree that we need a more open-source approach that protects creators' rights without stifling creativity.China's Pioneering Approach to AI-Generated Art CopyrightTheir conversation lands on China’s pioneering approach to AI-generated art copyright. Declan shares, "China granted copyright protection to an AI-generated image. The artist used Stable Diffusion and created the artwork through a series of prompts."Debangsha highlighted, "The court considers the choice of AI provider, detailed prompt engineering, iterative process, and technical adjustments. It recognizes the human creativity involved." "This case sets a precedent that acknowledges the expertise and decision-making in AI art creation."Declan adds, "Unlike the West, where there's little to no protection, China’s model offers valuable insights on fostering a thriving AI art ecosystem while protecting creators' rights."AI's First Copyright: Behind the BreakthroughSummary and Key Issues for Freeing Artists to Work with AI as a ToolSection 1: The Perfectionist's Dilemma in AI Art CreationThe conversation begins with an intriguing question: What if someone could recreate the same image with just ten prompts? Debangsha Sarkar, COO of EclimAI and a data scientist with a Master of Data Science and Master of Science - MS Computer Science from the University of British Columbia sees the value of prompt engineering as a learned skill, one that he is not that good at.Drawing from his experience as a programmer, he knows the inherent drive for perfection that many creators feel:"As a programmer, I have this itch for perfection. You will never be satisfied with ten prompts because it will always seem too easy to you.And it will say, what if I try the 11th one and I can make it a little bit better, and you will either make it or break it and then you like, let's try the 12th one.Maybe this will be a little bit better. Maybe the color doesn't look good here. So I think you are going to end up with 600 prompts anyway."This insight underscores the iterative nature of creative work, even when using AI tools. Debangsha argues that refining and perfecting an AI-generated image is a skill in itself, one that requires human input and decision-making."Even if I had the same image in mind that this is the image I want to make with Midjourney, I can never recreate that image that Jason came up with."This statement recognizes the unique human touch that goes into creating AI-generated art. 
It's not just about the final product but also about the creation journey and the specific choices made along the way.

Section 2: The Emergence of Prompt Engineering as a Skill

The discussion shifts to the growing field of prompt engineering, a new skill set that has emerged with the rise of AI art generation tools:

"Some people are good at prompt engineering, and it's a human skill. It's a learnable skill. It's not a skill people are born with. You have to learn it. And some people are good at it."

This highlights the human element in AI art creation. Developing effective prompts that yield desired results is becoming a valuable skill. Human input should be recognized and valued in discussions about copyright and creativity:

"I think the law should accept that fact. At least take into consideration that, hey, there is a human skill involved, right?"

* Case Study: Colorado Festival
  * An artist used 600 prompts to create AI-generated art.
  * "He was very open about stating that it came from AI."
  * Discusses whether the number of prompts signifies creativity.
  * Questions about originality and human expression in AI art.
  * Legal precedents and the necessity of human input for copyright.
* Other Legal Cases
  * A graphic novel created using AI imagery and human-written content.
  * The text was copyrighted, but the images were not.

Human Skill in AI Art:
* Prompt Engineering as a Skill
  * It's a learnable skill, not innate.
  * Some people excel at it, highlighting the need for legal recognition of the skill.
  * "Some people are really good at prompt engineering, and it's a human skill."

Section 3: Copyright Challenges in the AI Era

There are complex copyright issues surrounding AI-generated art, including two distinct but often confused copyright debates:

1. Can artists who use AI tools copyright their creations?
2. Are AI companies infringing on existing copyrights by using images to train their models?

While Jason Allen's work did not get a copyright from the US Copyright Office, the graphic novel Zarya of the Dawn got a copyright for the words and formatting of the book. However, Midjourney's images were not copyrighted. The US Copyright Office recognized the human effort in writing and assembling a book, but not the work required to create the images.

"The imagery was from AI, but she wrote it and formatted the book. So they let her copyright the book and the words a human created. But I could take her images. So the pictures in the book were not copyrighted."

This case highlights the nuanced approach that copyright law is taking toward AI-generated content. It recognizes human contributions while grappling with the status of AI-generated elements.

"We should be able to detect the process within these generative AIs. For example, we have semantic analysis. We have language analysis where you can understand the expertise in the question."

Tracking the creative process behind AI-generated art could provide a basis for copyright claims, allowing artists to demonstrate their expertise and creative input.

Technical Aspects of AI Art:
* Latent Space
  * Explanation of how AI stores and generates images from latent space.
  * Legal arguments around whether AI-generated images are genuinely original.
  * "Even though it looks like the original one, it's not the original one."

Section 4: The Ethics of Data Scraping and AI Training

08-02
20:42

AI B2B Co-Pilots: Stop Tech Talk, Boost Sales by Listening

Listen to the engineer-speak driving the AI industry and you understand why adoption is slow. When you listen, you learn what the customer wants. Most engineers are so excited, for good reason, that they launch into a spiel about features without asking the person what they actually want. We share a framework and tools below to help simplify this for tech startups and anyone offering a product or service.

This podcast shows how to reverse complex talk and make tech inviting by listening, drawn from deep experience in AI B2B tech sales: from my guest Jonathan Khorsandi of DIWY.eu, and my own work with tech companies.

What does an angry techie asking Steve Jobs a harsh question in 1997 teach the AI industry now? The promise lies not in the technology, but in the people who use it.

"And one of the things I've always found is that you've got to start with the customer experience and work backwards to the technology. You can't start with the technology and try to figure out where you're going to try to sell it." - Steve Jobs

Jonathan loves working with tech startups:

"I'm hooked on innovation, startups, ridiculous ideas, and people eager to change the world. As a tech founder I've been able to build, consult with, and coach companies for the last 26+ years, which put me at the edge of change and trends before they became 'a thing.' My latest endeavor, Diwy (Do It With You), builds a $1,000,000+ sales pipeline within 6 months for promising, innovative B2B tech companies. We use common sense, AI, automation, and rock-solid targeting that goes beyond accessing databases that everyone else has access to."

In B2B tech sales, a common scenario plays out way too often: brilliant engineers and product developers, bursting with excitement about their latest creation, face a polite but unengaged audience during product demos. The result? Ghosted like a bad date, leaving the tech team wondering what went wrong. So they look for more features, instead of listening to the customer.

This is the engineer's dilemma, at the heart of why many excellent tech products struggle to gain traction in the market against lesser competition. Jonathan helps tech companies worldwide bridge this gap between product excellence and sales success. In Episode 54, we'll explore Jonathan's insights and the AI tools that can help transform your tech talk into sales success.

The Engineer's Dilemma: When Passion Becomes a Pitfall

"What I'm learning here with tech companies in Europe and in the US and even in Asia, the companies I've been working with, because they're so brilliant in engineering and technology and product development, it blows my mind how they think through that. It means they speak their language really well, but they don't really speak sales that well." - Jonathan Khorsandi

The problem isn't with the product or its features. It's with the process around sales. Engineers and tech founders often showcase every exciting feature they've built, overwhelming potential customers with technical details they may not understand or care about.

Jonathan explains the typical scenario: "And so when they get an appointment, they get so excited. Oh my God, they get excited. And for good reasons. They work really hard. And then they get the meeting and they're like, 'Look, can I just show you all the stuff that we built? Because it's so damn exciting, right?' And they go in and they do the demo, and the people sit there politely. 'Yeah, yeah. No, nice, nice colors. That looks good.'
And then they ghost them like a bad date."

The key to overcoming this dilemma? Listen to the customer first, then tailor your presentation to their specific needs.

AI DISC Profiling: Your Customer Whisperer

One of the powerful tools in Jonathan's arsenal is AI-powered DISC profiling. This technology acts like a secret decoder ring for customer communication, helping sales teams understand their prospects' communication styles and decision-making processes before they even get on a call. Go to LinkedIn, find a possible customer, and before you reach out, the AI shows you their communication style, how they handle conversations, and suggests good ways to start the conversation based on the person's style.

Jonathan shares his experience with a tool called HumanLinker.com: "Before I get on sales calls with people I really don't know, it's called HumanLinker.com, but they have a plugin on Chrome that analyzes their LinkedIn profile, and it gives you their DISC profile. It tells you what their DISC profile is and how they make decisions, what stresses them out, what makes them feel heard."

This information allows sales teams to tailor their approach to each customer. For example:

- For altruistic, consensus-seeking customers: Focus on how the product will benefit the team.
- For detail-oriented engineering types: Discuss the process in depth.
- For high-driving, results-oriented individuals: Get straight to the point and focus on concrete outcomes.

By speaking the customer's language, you transform your sales approach from a tech lecture into a problem-solving conversation.

He also uses Ubiquity, an AI tool that lets him record one video and personalize it for many people, inserting their name and website into a video he can send out hundreds of times: "I looked at your website, but it's like, one minute. But now I can tell this tool, like, there's Declan and there's 500 other people where I'm doing the same message, but when they see the video, my lips are moving. Hey, Declan. Hey, Bill. Hey, Samantha. Hey, Jonathan. Whatever. And then I put in the websites that I'm looking at, and then it looks like I just did that video specifically for that person."

The Great Chatbot Letdown (And Why AI Agents Are the Real MVPs)

While many companies have experimented with chatbots for customer support and sales, Jonathan's experience suggests that the real potential lies in AI assistants, or agents. "I think that's the evolution of what you call a chatbot. I think it's going to be AI-assisted conversations, and you may not even need a screen anymore. You're just going to need like a voice thing going back and forth."

These AI agents can offer more nuanced, context-aware interactions than traditional chatbots. For example, Jonathan mentions a startup in Paris developing voice-based role-playing for salespeople: "You pick up the phone, you call a number, or you press the app, and then if you are a salesperson applying for a job, the hiring manager says, 'Okay, interact with this bot and overcome objections. Let me see how you do.'"

This technology allows sales teams to practice and refine their pitches before getting on real calls with customers. Here's a super simple example from Pitchmonster.io that I recorded; there are many tools in this space because it saves so much time learning to overcome objections:

AI Sales Role-Playing: Practice Makes Perfect Pitches

Building on the concept of AI-assisted role-playing, Jonathan emphasizes the importance of practice in sales success.
These AI-powered practice tools offer several advantages:

1. They let salespeople refine their pitches without the pressure of a live audience.
2. They provide data-driven feedback based on successful calls in similar industries.
3. They save time and resources by reducing the need for human-to-human role-playing sessions.

The Pitchmonster.io session mentioned above demonstrates a quick role-play scenario:

> SalesAvatar: "Explain your product and pricing to me."
> Declan: "I set up affiliate programs for people at a $2,500 a month retainer, plus a percentage of commission."
> SalesAvatar: "I love your product and see a clear value. However, I don't think we can allocate budget for this."
> Declan: "You've not even looked at what I'm presenting here, because the budget is based on performance, not on just a consulting gig."
> SalesAvatar: "Okay, send me more info over email."

This brief exchange shows how AI role-playing can help salespeople practice handling objections and steering conversations in the right direction.

The Demo That Sells: Jonathan's Tech Sales Framework

At the heart of Jonathan's approach is a structured framework for conducting effective sales demos. It's designed to appeal to the engineer's love of processes and flowcharts while steering them away from overwhelming customers with technical details:

"So I think what works really well with engineer-type founders, based on the ones I work with, is that you give them a process, a flowchart. They get it, they see it, there they go: 'Okay, that's the flowchart. This is how I do a discovery call, this is how I do a demo, these are the things I showcase.'"

Key elements of Jonathan's framework include:

1. Thorough Discovery: Before the demo, conduct a discovery call to understand the customer's pain points and needs.
2. Focused Presentation: Only showcase the features that directly address the customer's stated problems.
   "You mentioned these three things. And if we resolve those - on the next call, I'm going to show you how we resolve at least two of those three questions. Let's talk next."
3. Resist the Urge to Overshare: Even if your product has 20 amazing features, focus on the 2-3 that matter most to this specific customer.
   "I just show them the two features that will solve their problem. Look, most clients, most people that have these three issues - what we've learned is that it's because of this, this and this, which we solve with these two features."
4. Plant Seeds for Future Expansion: Briefly mention other potential benefits that could be explored after the initial problems are solved.
   "Maybe we should look at that in the..."


1800X Faster AI Agent Reveals Obscure AI Stocks

Most AI agents talk about what they'll do in 2-3 years, based on predictions of the future and suspect business models. Then there's LevelFields.ai, an AI agent that automates the research and evaluation process for the stock market. Developed using proprietary technology, it's one of the first working examples of an AI agent.

In this episode of The AI Optimist podcast, we explore the world of AI-driven investment strategies with Andrew Einhorn, CEO of LevelFields AI. Einhorn's company has developed an AI system that analyzes financial data and identifies investment opportunities 1800 times faster than a human analyst. This groundbreaking technology aims to level the playing field for retail investors, giving them access to the same powerful tools used by large hedge funds and financial institutions.

Our Guest: Andrew Einhorn - CEO of LevelFields.ai

Andrew is Chief Executive Officer and co-founder of LevelFields, an AI-driven fintech application that automates arduous investment research so investors can find opportunities faster and more easily. He aims to create AI tools that make advanced financial strategies effortless and accessible for all. With over 10 years of experience building, leading, and exiting technology firms, he has a proven track record of delivering innovative solutions that solve real-world problems and generate value for customers and stakeholders.

The Genesis of LevelFields AI

Einhorn and his team developed LevelFields AI after years of experience working with Fortune 500 companies, a $65 billion hedge fund, and even the White House. Their software was initially designed to locate and analyze events that could impact a company's stock price, relaying this information to corporate communications teams as quickly as possible.

After selling their previous company, Einhorn and his team regrouped to consider how to improve their technology. The onset of the COVID-19 pandemic sparked a new direction:

"We looked at it and said, what can we do to benefit society? What can we do to benefit people? We saw a big disconnect between your large asset managers with 200 analysts who can pore through stocks, look at financials, 10-Qs, 10-Ks, and all the filings, and find great investment opportunities. And for most people, they don't have time to do that. And even if they did, they couldn't cover 6,000 stocks just on the US exchanges."

This realization led to LevelFields AI, a system designed to bridge the gap between institutional and individual retail investors.

The Power of AI in Financial Analysis

LevelFields AI can read and analyze an astounding 30,000 documents per minute. This allows the system to identify patterns and anomalies that human analysts might miss, or might take weeks to find. The AI focuses on events proven to move share prices, relaying this information to users through alerts.

Einhorn explains:

"What we can do with these AI search agents is monitor everything happening around these stocks, looking specifically for events proven to move share prices. And when it finds them, it relays them as an alert. So a user doesn't have to read all these documents, search through them, or do stock screeners."

The AI's Data Sources and Analysis Process

Unlike many AI systems that rely on news articles or third-party analyses, LevelFields AI primarily sources its information directly from company releases and filings.
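To make the scan-and-alert idea concrete, here is a simplified sketch - not LevelFields' proprietary system - that matches a small, hypothetical taxonomy of price-moving event types against a company document and emits an alert per hit. The event patterns and the press-release text are invented for illustration.

```python
# A simplified sketch of the event-scan-and-alert idea described above.
# Not LevelFields' system: a hypothetical event taxonomy is matched
# against incoming filing/press-release text to produce alerts.
import re

# Hypothetical taxonomy of event types known to move share prices.
EVENT_PATTERNS = {
    "dividend_increase": re.compile(r"increas\w+ .{0,40}dividend", re.I),
    "ceo_change":        re.compile(r"(appoint|resign|step\w* down).{0,40}"
                                    r"(CEO|chief executive)", re.I),
    "buyback":           re.compile(r"share (repurchase|buyback)", re.I),
}

def scan_document(ticker: str, text: str) -> list[dict]:  # Python 3.9+
    """Return one alert per event type detected in a company document."""
    return [
        {"ticker": ticker, "event": event}
        for event, pattern in EVENT_PATTERNS.items()
        if pattern.search(text)
    ]

release = ("Portland General Electric announced today that its board "
           "has increased the quarterly dividend by 5 percent.")
for alert in scan_document("POR", release):
    print(f"ALERT {alert['ticker']}: {alert['event']}")
# -> ALERT POR: dividend_increase
```

A real system would replace the regexes with the context-aware language models discussed next, but the pipeline shape - document in, typed event alert out - is the same.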
This approach helps eliminate bias and focuses on factual events rather than opinions or speculation. Einhorn explains the advantage:

"Largely, it's directly from the companies themselves - the reports they put out, the press releases they put out, and, in some cases, interviews that they've done... And that prevents many problems, right? It prevents some of the opinionizing that tends to happen in news articles, or the bias of an article to focus on one particular detail of the press release and not others deemed less important."

The system's analysis goes beyond simple word recognition. It uses context-based algorithms to understand the true meaning of the information it processes. Einhorn provides an example:

"Let's say you had two articles before you, and you're an AI system. One article from National Geographic magazine talks about bluebirds, right - what they do and how successful they are. The other is a Bloomberg article about Bluebird Bio, a publicly traded biotech company. How does the AI know that one is a bluebird that flies in the sky and the other is Bluebird Bio? And the answer is the context around it."

Moreover, the AI correlates events with real-time stock price data to identify patterns:

"We use that by extracting real-time data from the exchanges and pairing it with the event. So there's a correlation component to that - a win rate: 95% of the time, this event will drive the share price up or down."

AI Agents: A Unique Approach

Einhorn's team took a different approach to developing their AI system than many others in the field:

"We built our own language model and libraries and ontologies. Linguistically, we have different linguistic algorithms that we employ - I think there's several hundred. We started the process in 2019, and we built it very focused on finance."

Initially, they considered using open-source AI tools, but found them inadequate for their needs:

"In the beginning, we thought, oh, maybe we could just get an open-source AI tool and train that and save on development. But they were so basic, it just wasn't going to work for our needs, because it required so much training. And the stuff it was doing in the open source was really just tokenizing the words - saying, this is a noun, this is an adjective, that's a proper noun. And that was a good starting point, but it was pretty elementary."

Instead of building on those basic tools, they decided to start from scratch:

"And so rather than rely on that and train some analysis tool how to do it properly and build on top of it, we just scrapped it and did it from scratch."

Their focus was on developing context-based algorithms that could understand the nuances of financial language - knowing, in the bluebird example above, which articles are about the bird and which are about the biotech company:

"We spent more time developing contextual-based cues and algorithms that would tell the AI, hey, you're in a finance publication now, we're not talking about animals."

The team also prioritized extracting factual information over opinion or hyperbole:

"How do you extract hyperbole? We wouldn't allow the system to look at some of that type of language. It was just looking at straight events. And so our goal was no bias, no bull, right? Just look at events that happened or didn't happen or might happen in the future."
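The "correlation component" Einhorn describes - the win rate of an event type against subsequent price moves - is, at its simplest, a frequency count over history. A back-of-the-envelope sketch, with entirely made-up event/return data:

```python
# A back-of-the-envelope sketch of an event-type win rate: for a given
# event type, what fraction of past occurrences were followed by a move
# in a given direction? The (event_type, next_day_return) history is
# invented for illustration.
history = [
    ("dividend_increase", +0.031),
    ("dividend_increase", +0.012),
    ("dividend_increase", -0.004),
    ("dividend_increase", +0.020),
    ("ceo_resignation",   -0.052),
    ("ceo_resignation",   -0.017),
    ("ceo_resignation",   +0.003),
]

def win_rate(event_type: str, up: bool = True) -> float:
    """Share of historical events of this type followed by a move
    in the stated direction (up=True means the price rose)."""
    returns = [r for e, r in history if e == event_type]
    hits = sum(1 for r in returns if (r > 0) == up)
    return hits / len(returns)

print(f"dividend_increase up-rate:  {win_rate('dividend_increase'):.0%}")
print(f"ceo_resignation down-rate:  {win_rate('ceo_resignation', up=False):.0%}")
# -> 75% and 67% on this toy history; a production system would also
#    control for market moves, sector effects and sample size.
```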
Einhorn emphasizes that the system is constantly learning and adapting:

"The system was learning on a regular basis. As language shifts over time, you have to adjust your algorithms and the language that you're looking for. And this includes shorthand - social media acronyms that just become common speak - and being able to put those together. So it's a constant process of improving."

This approach has allowed LevelFields to create a system specifically tailored to financial analysis, setting it apart from more general-purpose AI tools.

Uncovering Obscure AI Stocks

While much of the AI investment hype has focused on big tech companies, LevelFields AI has identified some surprising players in the AI space. These include:

* An Oregon utility company powering data centers
* A company developing AI-driven military drones
* A well-known company not typically associated with cutting-edge AI

One example Einhorn provided was Portland General Electric (ticker: POR). This utility company has been performing well despite a challenging environment for utilities:

"We got a couple of utility companies that came across that were increasing their dividends substantially, and it caused us to look at one of them, which is a Portland-based utility company - I think the ticker is POR. Why is this utility company doing so well when utilities are not supposed to be doing well in a high interest-rate environment? Well, the majority of the servers on the West Coast are based in Portland, and this is the utility company supplying electricity to those data centers."

Another exciting find was AeroVironment (ticker: AVAV), a $5 billion drone manufacturer:

"AVAV is a stock that we've tracked quite often. It's a $5 billion drone manufacturer. They made the Switchblade drone, the most famous drone in the Ukraine war. They can shoot it like a mortar, and then it flies to the target and has a precision lock on the target. It's using AI to navigate on its own along the way."

The Human Element in AI-Driven Investing

While the AI system provides robust analysis and alerts, LevelFields offers a human-curated service for investors who want more guidance. This service, priced at $167 per month, provides subscribers with carefully selected trade recommendations based on the AI's findings.

Einhorn explains the rationale behind the offering:

"We were actually asked by a number of users. They were subscribers for level one, which is a couple hundred dollars a year. And they said, this is great, but if you could tell me what to buy, where to buy, and where to sell, that would be even more helpful... If somebody follows the alerts, cherry-picks the ones coming through the platform, and then tells me the best trade setup, that will save me many hours. And I'm willing to pay $167 monthly to save myself a few hours and make much more money."

The Future of AI in Finance

As AI evolves and becomes more integrated into various industries, its impact on the financial sector will likely grow.


The AI Bubble Burst Wakes Up....

How can I help a small company with 20 employees be more efficient and productive? It should be obvious, right?

But that's what the CEO asked me. And the only use cases he could really cite were some tricks and things done on social media. Is that our best use case of AI?

[TikTok video]

It was crazy. And part of it is driven by this industry and why it needs to change. And it's changing right now.

A Brief History of AI Hype

Let's look at the brief history of AI hype.

Back in 2015, Elon Musk predicted driverless cars within a few years. Then 2019 and 2021 came and went, and it hasn't happened yet. We're in 2024 now. It's coming, but not as fast as predicted.

Geoffrey Hinton, the godfather of AI, said radiologists would be gone in a few years - and that was in 2017. Last I looked, they still have jobs.

Mark Zuckerberg pitched the Metaverse for years and, according to estimates, lost $45 billion because it didn't work. In fact, listen to Satya Nadella, the CEO of Microsoft, talk about this:

"And when we talk about the metaverse, we're describing both a new platform and a new application type, similar to how we talked about the web and websites in the early 90s."

What's funny is that if you listen to his pitches for AI, it's like they took out the word "metaverse" and put in "AI." This is where we're at - and why people aren't adopting it and why revenue is slow.

Combine that with endless obsessions with AGI and following Ray Kurzweil's predictions. Just because Kurzweil got it right once doesn't mean he's going to get it right again.

The AI Bubble Burst

All this future-predicting leads into this AI Optimist episode: "The AI Bubble Burst Wakes Up. It's the end of the AI world as we know it. What's next post-hype?"

That's what I want to show you - the real value, because that crash is going to help AI find its true north. And this is not just me saying this. Sources from Sequoia to the CEO of the Chinese search engine Baidu are pointing to the same problem.

Hyping the future is a simple and profitable way for big tech companies like Microsoft, Meta, and Google to raise their stock price. It's easier while they're still building.

But right now, between hallucinations, untrustworthy tools, lack of privacy, and the AGI Terminator horror story, adoption is slow. People don't trust it. They're afraid of it. We're telling them it will replace their jobs.

Even Gartner's hype cycle research suggests we're probably at the tipping point. And while we don't know for sure, numerous sources - from Sequoia to the CEO of Baidu in China - are finding the same problem: lots of hype, not a lot of performance.

"This crash that's coming up, from my experience in the dot-com crash, is going to be very different, but it will clear out the hype and focus on businesses. That's when good things happen, and adoption will likely increase in practical, not future-predicting ways, and business models will arise."

Fake it Till You Profit - The Three Major Problems

In this episode we cover solutions to the three "fake it till you profit" problems the AI industry faces, which are undermining the revenue it needs to generate.

Problem #1: Low Revenues

The problem is that not enough people are spending money on AI. Now, we all know that ChatGPT is making a lot of money. OpenAI's revenue increased from $1.6 billion in late 2023 to $3.4 billion. The problem is that most of that revenue comes from subscriptions - it's not going 10x. And their valuation is $80 billion, based on enterprise use that was only $200 million.
So OpenAI is the dominant player, far outpacing other startups. Most of the others are doing $100 million in revenue or less. And remember, these are billion-dollar valuations with trillion-dollar dreams. The big tech companies are profiting, but the rest of us are sitting on the sidelines listening to a lot of hype.

The revenue-to-spending mismatch is crazy. In 2023, AI firms spent $50 billion on Nvidia chips - the picks and shovels - but generated only $3 billion in revenue. That mismatch is what's going to bring the crash. According to many sources, the bubble appears to be closer than expected.

Compound that with the current content model feeding ChatGPT, which was all free: they scraped it without asking permission. The lawsuits from the New York Times, Getty Images and others will likely bring a licensing model that makes training data much more expensive.

And the last part of low revenues: the electrical costs are crazy. They're talking about spending trillions to replace the US grid, which can only barely keep up with consumer usage today. We can't even support electric vehicles at scale. So if you think this is going to happen really quickly, do the math. What's being hyped? Does it make sense? The best thing that could happen in this market is for it to go through its crash.

Problem #2: High Costs

It's going to take an estimated $600 billion for these companies to break even. David Cahn, an analyst with Sequoia Capital, believes AI companies will have to earn about $600 billion per year to pay for infrastructure such as data centers - against revenue that sits in the single-digit billions. That's a gap that doesn't even make sense.

And in fact, the founder and CEO of Baidu, the biggest search engine in China, says their country has too many large language models and too few practical applications. That's one of the keys to problem number three.

Problem #3: Limited Business Models

The only business model is large language models - plus the picks and shovels, like Nvidia, or a company like Astera Labs at a much smaller level.

We all talk about AGI, and that fear of the future makes people scared, which prevents and slows adoption. But what about predicting a business model? What happens when this thing grows, and what does that look like? How can regular businesses, like that 20-person company, take advantage the way they should?

Even more confusing is what's called AI washing: using "artificial intelligence" to create a false or distorted impression of a company's performance or potential.

For example, when Reddit went public, they used their $60-million-a-year licensing deal with Google almost as validation, and they've done very, very well. But they're not an AI company.

And there are others. Back in 2016, Amazon said you could walk into one of their stores, pick up your groceries, and walk out without ever using a cash register - all done by AI. Now, eight years later, we finally find out the truth:

SOURCE: ColdFusion research
"Amazon's so-called AI technology was actually powered by a thousand people in India watching and labeling videos to ensure accurate checkouts."

It's not AI - it's a thousand people in India tracking you and entering your orders.
That's what's called AI washing.

A lot of companies and startups really just use ChatGPT through what's called an API wrapper, so they can say, "Hey, we're AI" - but they're running on someone else's technology. There's not that big a difference between them and other companies. This is where the confusion comes from.

The Solution: Practical Problem-Solving

Let's talk about the solution you can put into practice, and why this crash is going to be good: it will clear out the companies that aren't delivering, and we'll start creating business models that are revenue-driven. That will fuel real growth and job creation.

In fact, Georg Zoeller, a former business engineering director at Meta, says: "Despite being in the largest private investment in technology in the history of mankind, companies have cut jobs."

The job loss is real. I've seen it in marketing and customer support, and if we want to create jobs, we've got to get rid of this hype and hot air - which is what markets naturally do. He says the job loss is bigger than the dot-com bust, and while I don't know if that's true, what's interesting is that we have to go post-hype.

The solution is practical problem-solving based on specific use cases. And listen to Mark Zuckerberg, who made a great point recently:

SOURCE: Kallaway channel
"Tech industry kind of talks about building this one true AI. It's like it's almost as if they kind of think they're creating God or something. And it's like, it's just... that's not what we're doing."

This industry has a bit of a God complex, as he puts it. They're trying to create things that make you go "Wow!", but they don't really deliver solutions for you as a user, a business, or a startup.

Watch what happens when this crash - which is coming, according to sources from Sequoia to Baidu - arrives, because that's when the cleansing happens. And take it from me, having gone through the dot-com crash: my revenue tripled when the hot air and hype went out, and the people remaining, who were revenue-driven and business-driven, grew.

We're now going from large language models to what are called SLMs - small language models. Focus.

A practical example right now is a company called LevelFields. LevelFields uses AI to make it easier for ordinary investors to compete in the stock market against big hedge funds and institutional investors with lots of money and resources - an unfair fight if the user is left alone. Their stock options and trading tools give them information that AI assimilates in seconds, rather than hours of poking through different links.

Another example - again: what's the problem, and what's the focused solution? Sixfold uses AI in the insurance industry, building efficient, customized solutions for diverse insurance needs. That's what you need to do post-hype.

Customizing AI for Your Solution - Getting out of the AI Bubble

Next up is customizing your solution. The biggest mistake even that small company with 20 employees makes is thinking they can buy the t...

