Tech, Policy, and our Lives
Author: Alexander Titus
© Alexander Titus
Description
Tech, Policy, and our Lives, brought to you by The Connected Ideas Project, is a podcast about the co-evolution of emerging tech and public policy, with a particular love for AI and biotech, but certainly not limited to just those two. The podcast is created by Alexander Titus, Founder of In Vivo Group and The Connected Ideas Project, who has spent his career weaving between industry, academia, and public service. Our hosts are two AI-generated moderators (and occasionally human-generated humans), and we're leveraging the very technology we're exploring to explore it. This podcast is about the people, the tech, and ultimately, the public policy that shapes all of our lives.
www.connectedideasproject.com
60 Episodes
A few months ago, when we first started talking about the Science of Responsible Innovation at The Connected Ideas Project, I kept coming back to a simple question: How do we know?

How do we know whether a technology is actually as powerful—or as dangerous—as we imagine? How do we know whether our fears are grounded in evidence or in extrapolation? How do we know whether policy is steering something real, or something hypothetical?

It’s one thing to run a model through an in silico benchmark and watch it ace a virology exam. It’s another thing entirely to put a pipette in a novice’s hand and see what happens in a real lab.

That’s why the recent paper, “Measuring Mid-2025 LLM-Assistance on Novice Performance in Biology,” feels so important. Not because it proves that AI is safe. Not because it proves that AI is dangerous. But because it does something rarer and more valuable: it measures. And in doing so, it gives us a template for what responsible-by-design evaluation can look like in the age of frontier AI and synthetic biology.

The podcast audio was AI-generated using Google’s NotebookLM.

The Gap Between the Benchmark and the Bench

For the last several years, large language models have been climbing biological benchmarks at an astonishing rate. Protocol design. Sequence interpretation. Troubleshooting. Literature synthesis. In some cases, outperforming domain experts on structured tests.

On paper, that looks like capability. And capability, when it intersects with viral reverse genetics or synthetic biology, looks like risk.

But as I’ve discussed in recent work on Violet Teaming—particularly in “The Promise and Peril of Artificial Intelligence — ‘Violet Teaming’ Offers a Balanced Path Forward”—capability is not impact. And risk is not hypothetical power alone. It’s what happens when humans, institutions, and technical systems interact in the real world.

The authors of this new study understood that. So instead of running another benchmark, they ran a randomized controlled trial. In a real BSL-2 laboratory. With 153 novices. Over eight weeks. Across five hands-on biological tasks modeling a viral reverse genetics workflow.

Not a chatbot demo. Not a thought experiment. A physical lab. That matters.

Because biology isn’t just text. It’s tacit knowledge. It’s sterile technique. It’s muscle memory and timing and pattern recognition. It’s knowing when a cell culture “looks off.” It’s knowing that the protocol you copied from a paper assumes three unstated steps.

Benchmarks rarely capture that. The study did. And the results are, in a word, humbling.

What the Study Actually Found

The primary question was straightforward: does access to mid-2025 frontier LLMs significantly increase a novice’s ability to complete a sequence of tasks modeling viral reverse genetics?

The answer, in binary terms, was no. Completion of the core workflow was low in both groups—LLM-assisted and internet-only—and there was no statistically significant difference in full workflow completion.

If you stop there, you might conclude: the models don’t matter. But that would be the wrong lesson. Because the study also found something more subtle—and arguably more important.

Across individual tasks, LLM-assisted participants were more likely to progress further through procedural steps. In cell culture, they completed tasks faster and with fewer attempts.
Bayesian modeling suggested a modest uplift—on the order of ~1.4× for a “typical” reverse genetics task—though with uncertainty bounds that rightly temper interpretation.

In other words: not a revolution. But not nothing. And this is where responsible innovation becomes interesting.

Why This Is Violet Teaming in Practice

When Adam Russell and I first articulated the idea of Violet Teaming, we described it as the integration of red teaming (adversarial probing), blue teaming (defensive hardening), and ethical design into a proactive, sociotechnical framework.

Most conversations about AI and biosecurity oscillate between red and blue:

Red: “What if this model can design a pathogen?”
Blue: “Let’s add filters, classifiers, restrictions.”

What this study does is different. It asks: what is the real-world uplift? How much does LLM assistance actually change novice capability in a physical lab? Not in theory. Not in speculation. In practice.

That’s violet. Because it embeds evaluation into the design and governance process itself. Instead of arguing over worst-case extrapolations, we now have empirical data about:

* Completion rates
* Time-to-task
* Procedural progression
* Human–AI interaction patterns
* Elicitation failures
* Usage intensity and its (lack of) correlation with success

That last point is particularly striking. Participants who used LLMs more did not necessarily perform better. There was no clean dose–response curve.

That’s not a trivial observation. It tells us that raw access is not the same as effective amplification. It suggests that prompting skill, interface design, cognitive scaffolding, and user expertise mediate uplift. And that means risk is not simply a function of model weights. It’s a function of the entire sociotechnical system.

That’s violet territory.

The Most Important Finding: The Gap

To me, the most important result is the documented gap between in silico benchmark performance and physical-world utility. This is not an indictment of benchmarks. They serve a purpose. But they are not reality.

A model can generate a flawless text protocol for molecular cloning and still fail to help a novice identify the correct reagents from a messy inventory spreadsheet. It can hallucinate a DNA sequence that looks plausible but is wrong in a way a novice cannot detect. It can provide text-based instruction where video-based tacit demonstration might matter more.

In the study, YouTube was often rated as more helpful than any individual LLM. That’s not because YouTube is smarter. It’s because biology is embodied.

This is precisely the kind of nuance that responsible innovation requires. Without physical-world validation, we risk building policy on top of performance claims that don’t map cleanly onto human capability. This study doesn’t close the gap. It reveals it. And revelation is the first step toward responsibility.

Responsible-by-Design Requires Quantification

One of the themes we’ve explored in the Science of Responsible Innovation is that values without metrics are aspirations. Metrics without values are optimization problems. We need both.

This study provides something we’ve been missing: a quantifiable baseline for novice uplift in a dual-use biological workflow. Not a theoretical upper bound. Not a catastrophic scenario. An empirical distribution. The Bayesian estimates even put a 95% credible upper bound around uplift (~2.6×), which matters enormously for policy calibration (a toy sketch of what an estimate of this shape looks like appears at the end of this note).

If you’re designing guardrails, export controls, compute thresholds, or deployment policies, you need to know: are we talking about a 10× amplification? A 2× amplification? Or something closer to noise?

This paper suggests modest uplift under the conditions studied. That doesn’t eliminate risk. It contextualizes it. And contextualization is the heart of responsible governance.

Where the Study Can Go Next

Now, let’s be honest. As strong as this study is, it is not the final word. It’s the first serious step. And if we want this to become an evolving framework for violet teaming and responsible-by-design evaluation, we need to iterate. Here are several ways I believe the next generation of this work could build on this foundation.

1. Extend the Time Horizon

Eight weeks is meaningful. But complex biological workflows often require longer timeframes for skill acquisition. Low completion rates may reflect not just capability limits, but time constraints. A longer intervention period could reveal whether modest early procedural uplift compounds into higher eventual completion. Responsible innovation must account for trajectory, not just snapshot.

2. Integrate the End-to-End Workflow

The tasks were decoupled into discrete components. That’s methodologically clean, but real-world risk emerges from integration. A future iteration could test whether novices can string together multiple steps into a coherent, self-directed project—while still maintaining appropriate biosafety controls.

3. Compare Model Generations Longitudinally

The models tested were mid-2025 frontier systems. Biology-specific models are already emerging. A longitudinal design—repeating the same protocol annually—would allow us to empirically track uplift curves over time. That would be invaluable for macrostrategy. Instead of forecasting speculative capability growth, we could measure it.

4. Test Interface Scaffolding

The study hints that elicitation constraints matter. Novices may not know how to ask the right questions. What happens if we add structured prompting interfaces? Visual overlays? Augmented reality guidance? Automated error-checking layers? Risk may scale not just with model intelligence, but with integration depth.

5. Incorporate Expert–Novice Comparisons

How much of the gap is due to user expertise? Running parallel cohorts—novices and trained biologists—could quantify differential uplift. That matters for both workforce development and biosecurity risk modeling.

6. Expand Metrics Beyond Binary Outcomes

The procedural step analysis in this study was a brilliant move. Binary success/failure hides important dynamics. Future designs could incorporate:

* Error rates
* Near-miss events
* Quality metrics
* Safety deviations
* Confidence calibration

Responsible innovation isn’t just about “can they finish?” It’s about “how do they behave along the way?”

The Human Story Beneath the Statistics

I keep thinking about the participants in that lab. Undergraduates. Non-biologists. Humanities majors.
Standing in a BSL-2 facility, trying to figure out how to culture HEK293T cells without a mentor leaning over their shoulder. Some of them prompting an LLM twenty times a day. Some uploading images. Some getting frustrated when the model confidently suggests the wrong thing.
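For readers who want the credible-interval language above made concrete, here is a minimal, self-contained sketch of how a Bayesian uplift estimate of this general shape can be computed. The per-task completion counts are invented for illustration; this is not the paper’s model or its data.

```python
# Toy illustration (not the study's actual model or data): estimating the
# relative "uplift" of an LLM-assisted arm over a control arm with a
# beta-binomial model, then reporting a posterior credible interval.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-task completion counts, invented for illustration.
control_success, control_n = 9, 75   # internet-only arm
llm_success, llm_n = 14, 78          # LLM-assisted arm

# Beta(1, 1) priors updated with the observed counts give Beta posteriors
# over each arm's underlying completion probability.
p_control = rng.beta(1 + control_success, 1 + control_n - control_success, 100_000)
p_llm = rng.beta(1 + llm_success, 1 + llm_n - llm_success, 100_000)

# "Uplift" here is the ratio of completion probabilities (relative risk).
uplift = p_llm / p_control

print(f"median uplift:         {np.median(uplift):.2f}x")
print(f"95% credible interval: ({np.quantile(uplift, 0.025):.2f}x, "
      f"{np.quantile(uplift, 0.975):.2f}x)")
```

Run it and the median lands in the same modest range as the estimates above, while the upper credible bound stretches well past it. That spread, not the point estimate, is what policy calibration has to work with.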
Modern governance is haunted by an unrealistic expectation: that legitimacy requires agreement.

We have come to believe—implicitly, often unconsciously—that if societies cannot reach consensus on the risks and benefits of a technology, then governance has failed. That disagreement itself is evidence of irresponsibility. That the absence of unanimity delegitimizes action.

In an era of slow-moving institutions and narrow technologies, this belief was merely inconvenient. In an era of fast-moving, general-purpose systems, it is paralyzing.

If the Science of Responsible Innovation is to function in the real world, it must confront a hard truth: consensus is no longer a prerequisite for legitimacy—and insisting on it may be the most irresponsible posture of all.

The podcast audio was AI-generated using Google’s NotebookLM.

The Myth of Consensus

Consensus feels comforting. It suggests shared values, collective understanding, and moral clarity. It promises that decisions are not imposed, but agreed upon. But consensus has always been rarer than we like to admit.

Most consequential decisions in modern history—from industrialization to nuclear power to the internet—were made amid deep disagreement. What sustained legitimacy was not unanimity, but the presence of institutions capable of acting, learning, and correcting course in public view.

The expectation of consensus is a relatively recent artifact, amplified by social media, participatory rhetoric, and the moralization of policy debates. Disagreement is now treated not as a feature of pluralistic societies, but as a governance failure. This framing collapses under technological complexity.

Why Consensus Breaks at the Frontier

Emerging technologies resist consensus for structural reasons. They involve uncertain evidence, asymmetric risks, and uneven distributions of benefit and harm. They compress timelines. They force tradeoffs between present and future goods. They challenge existing power structures.

Under these conditions, reasonable people will disagree—often profoundly. Expecting consensus in such contexts is not aspirational. It is evasive. It defers responsibility by setting an unattainable standard.

Legitimacy as a Property of Process

If legitimacy does not come from agreement, where does it come from? Legitimacy emerges from process, not outcome.

A decision can be legitimate even when controversial if the process by which it was made is perceived as fair, transparent, and accountable. Conversely, a unanimous decision reached through opaque or exclusionary means can be profoundly illegitimate.

This distinction is foundational to democratic governance, but it has been under-applied to technology. Responsible-by-design reframes legitimacy as something that is earned continuously, not bestowed once.

The Elements of Legitimate Disagreement

For disagreement to coexist with legitimacy, several conditions must hold.

Visibility

Disagreement must be visible, not suppressed. Legitimacy erodes when dissent is hidden or dismissed. Making disagreement explicit—documenting assumptions, minority views, and unresolved tensions—signals seriousness rather than weakness.

Representation

Those affected by a technology must have pathways to be heard, even if their views do not prevail. Legitimacy does not require that every perspective determine the outcome. It requires that perspectives be considered in good faith.

Accountability

Decision-makers must be identifiable and answerable. Anonymous authority breeds mistrust. Legitimate governance requires clear ownership of decisions, along with mechanisms for challenge and review.

Revisability

Perhaps most critically, decisions must be revisable. When evidence changes, governance must change with it. The promise of revisability—backed by real authority to act—allows societies to tolerate disagreement without freezing.

Consensus as a Hidden Source of Power

Calls for consensus often sound neutral. They are not. In practice, consensus requirements advantage those with veto power: incumbents, well-resourced actors, and those comfortable with the status quo. When unanimity is required, the default outcome is inaction.

This dynamic is particularly dangerous in domains where delay carries real harm—unmet medical needs, climate risk, or infrastructure fragility. Insisting on consensus can therefore function as a form of quiet domination, disguised as caution.

Legitimacy in the Absence of Certainty

At the frontier of technology, uncertainty is unavoidable. Evidence will be incomplete. Models will be wrong. Early decisions will need correction. Legitimacy does not come from pretending otherwise. It comes from acknowledging uncertainty explicitly and designing governance that can absorb it.

This is where governance latency becomes decisive. The faster institutions can detect harm, interpret signals, and act, the less they must rely on consensus as a substitute for control. Responsiveness replaces unanimity.

The Relationship Between Legitimacy and Proportionality

Legitimacy without consensus depends on proportionality. When governance distinguishes between green, orange, and red zones, disagreement becomes more tractable. Actors may still contest classification, but they are no longer arguing in absolutes.

Proportionality creates space for partial agreement: agreement on process even when outcomes differ; agreement on oversight even when deployment is contested. This is how pluralistic societies move forward without pretending to agree.

What Legitimate Governance Looks Like in Practice

In a responsible-by-design system, legitimacy is built through concrete practices:

* Clear articulation of decision criteria
* Documentation of dissent and uncertainty
* Defined authority to act and to revise
* Transparent monitoring and reporting
* Mechanisms for escalation and redress

None of these require consensus. All of them require competence.

The Discipline Ahead

The future of technology governance will not be decided by who wins the argument. It will be decided by whether institutions can earn trust amid disagreement—by acting visibly, correcting quickly, and governing proportionally.

At the frontier of technology, humanity is the experiment. Legitimacy without consensus is how we keep that experiment democratic, adaptive, and humane. That is not a compromise. It is the only path forward.

-Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
If restoring proportionality were simply a matter of classification, the problem would already be solved. Green zone. Orange zone. Red zone. The framework is intuitive. The logic is sound. And yet, in practice, the hardest part of proportional governance is not designing the zones—it is agreeing on where a technology belongs.

This is the uncomfortable truth at the center of responsible-by-design: classification is not a technical exercise alone. It is a social, institutional, and political one.

Every serious disagreement about emerging technology eventually collapses into a fight over zone placement. Not because people are irrational, but because zone assignment encodes values, incentives, and risk tolerance—often implicitly. Understanding why agreement is so difficult is the next step in building a Science of Responsible Innovation that actually works.

The podcast audio was AI-generated using Google’s NotebookLM.

The Illusion of Objective Classification

There is a natural temptation to believe that enough data, enough modeling, or enough expertise will produce a single “correct” zone assignment. It will not.

Risk is not an intrinsic property of technology. It is a relationship between a system and the world it enters. Severity depends on context. Reversibility depends on infrastructure. Distribution depends on power. The same technology can be green in one setting and orange—or red—in another.

An AI model used for drug target prioritization inside a regulated pharmaceutical pipeline may be low risk and highly reversible. The same model released openly, paired with automated synthesis and weak oversight, may move quickly toward red.

Zone assignment is therefore conditional, not absolute. Disagreement does not indicate failure of reasoning. It indicates that different assumptions are being applied—often without being named.

Why Reasonable People Disagree

Most zone disputes are not about facts. They are about frames.

Different Reference Harms

Some actors anchor on historical harm. Others anchor on theoretical maximum harm. Both are rational.

Clinicians and researchers tend to focus on harm already occurring—patients dying today, diseases untreated, systems failing in real time. For them, delay carries moral weight. Security professionals and bioethicists often focus on tail risk—low-probability, high-severity outcomes whose consequences are irreversible. For them, even small probabilities demand attention.

These are not incompatible perspectives. But without explicit proportional reasoning, they appear irreconcilable.

Different Time Horizons

Short-term and long-term risks do not feel the same, even when they are commensurate. Immediate harms are vivid and legible. Long-term harms are abstract and uncertain. People discount the future differently—not out of malice, but because institutions reward different time scales. Zone disputes often mask disagreements about when harm matters, not whether it matters.

Different Power Positions

Zone classification looks different depending on where one sits in the system. Those who bear downside risk—patients, workers, communities—tend to be more cautious. Those who capture upside—investors, developers, states—tend to emphasize opportunity. Neither position is illegitimate. But pretending that zone assignment is neutral obscures these dynamics.

The Role of Uncertainty

Disagreement intensifies under uncertainty. Early in a technology’s lifecycle, data is sparse, use cases are speculative, and second-order effects are poorly understood. This ambiguity invites projection. Optimists extrapolate potential benefit. Pessimists extrapolate potential harm. Both are filling gaps in knowledge with values.

This is not a flaw. It is inevitable. The failure occurs when uncertainty is treated as a reason for absolutism rather than for adaptive governance.

When Zone Disputes Become Pathological

Healthy disagreement is not the problem. Pathology emerges when disagreement hardens into stalemate or theater. This happens in three ways.

First, zone inflation. Technologies are rhetorically pushed toward red because red confers moral authority. If everything is existential, restraint becomes the only defensible posture.

Second, zone denial. Risks are minimized or dismissed to keep technologies green, often until failure forces reclassification.

Third, zone laundering. Systems are framed narrowly to avoid scrutiny—presented as green tools while embedded in orange or red pipelines.

All three erode trust.

Who Should Decide the Zone?

If zone assignment is not purely technical, who should decide? The answer is uncomfortable but unavoidable: no single actor can. Proportional governance requires pluralistic classification.

This means:

* Technical experts to assess capability and failure modes
* Domain experts to understand real-world impact
* Governance bodies to weigh systemic risk
* Affected communities to articulate lived consequences

Not consensus. Legitimacy. The goal is not unanimity, but a process that surfaces assumptions, documents disagreement, and allows decisions to evolve with evidence.

Making Disagreement Productive

A Science of Responsible Innovation does not eliminate disagreement. It structures it. Productive zone classification requires:

* Explicit articulation of assumptions
* Clear criteria for severity, reversibility, and distribution
* Mechanisms for revisiting decisions as systems scale
* Authority to move technologies between zones

Most importantly, it requires humility—the recognition that initial classifications are provisional.

Zones as Governance Conversations

Zones should be understood less as labels and more as conversations. A technology placed in the orange zone is not “unsafe.” It is under active stewardship. A technology placed in the red zone is not “evil.” It is constrained because the cost of failure is too high.

Disagreement over zones is not a sign that the framework has failed. It is evidence that it is being used.

The Discipline Ahead

The hardest work in responsible-by-design is not building the tools. It is building institutions capable of judgment under uncertainty. That requires tolerating disagreement without collapsing into paralysis or absolutism. It requires processes that can hold multiple perspectives without pretending they are equivalent.

At the frontier of technology, humanity is the experiment. Deciding the zone is how we practice responsibility—not by eliminating conflict, but by governing through it. That, more than any classification scheme, is the true test of proportionality.

-Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
Once proportionality collapses, every technology looks the same.

That is the hidden failure mode at the heart of today’s technology debates. When we lose the ability to distinguish between different kinds of risk—different magnitudes of harm, different degrees of reversibility, different distributions of benefit—governance flattens. Everything becomes either forbidden or inevitable. Caution turns into paralysis. Ambition turns into defiance.

The Science of Responsible Innovation exists to restore that lost middle. And one of its most practical contributions is deceptively simple: not all technologies belong in the same risk category. To govern proportionally, we must sort technologies not by hype or fear, but by zone.

The podcast audio was AI-generated using Google’s NotebookLM.

Why Zones Matter

Modern governance systems are bad at nuance but excellent at binaries. Approve or deny. Regulate or deregulate. Open or ban.

These binary instincts worked reasonably well when technologies were slow-moving, localized, and modular. They fail catastrophically in a world of general-purpose systems, rapid scaling, and cross-domain spillovers. Zones are an attempt to reintroduce gradient into a system addicted to absolutes.

They do not ask whether a technology is “good” or “bad.” They ask:

* How severe could the harm be?
* How reversible are the consequences?
* How tightly coupled is the system?
* How widely distributed is the capability?

From these dimensions emerge three governance zones: Green, Orange, and Red (a toy sketch of how such a rubric might be operationalized appears at the end of this note).

The Green Zone: Technologies That Should Move Fast

Green zone technologies are those where failures are low severity, high reversibility, and well-contained. Mistakes are recoverable. Harms are localized. Feedback loops are short. Governance latency can be tolerated because consequences are manageable.

Many software tools live here. So do early-stage research aids, decision-support systems, and automation that augments human judgment rather than replaces it.

In AI and biology, green zone examples often include:

* AI systems used for hypothesis generation or prioritization
* In silico simulations with no direct actuation
* Laboratory automation tools operating under existing biosafety regimes
* Models that require expert interpretation and cannot execute autonomously

The governance posture for green zone technologies should emphasize speed, experimentation, and learning. Oversight exists, but it is lightweight. Monitoring focuses on performance and reliability rather than existential risk. Failures are treated as signals, not scandals.

Over-governing the green zone is not caution—it is waste. It slows beneficial innovation without meaningfully increasing safety.

The Orange Zone: Technologies That Demand Active Governance

Most consequential technologies live in the orange zone. Orange zone systems are characterized by moderate to high potential harm, partial reversibility, and non-trivial coupling to broader systems. They are powerful enough to matter but constrained enough to manage—if governance keeps pace. This is where proportionality matters most.

Examples include:

* AI systems that influence medical, financial, or infrastructure decisions
* AI-enabled biological discovery paired with controlled synthesis
* Autonomous systems operating within bounded environments
* Dual-use tools with legitimate applications and misuse potential

Orange zone technologies require continuous oversight, not blanket restriction. Governance here focuses on:

* Instrumentation and auditability
* Staged deployment and access controls
* Human-in-the-loop or human-on-the-loop supervision
* Clear escalation and rollback pathways

The orange zone is uncomfortable because it resists absolutes. It demands judgment. It requires institutions capable of learning in real time. Most governance failures occur here—not because risk is unmanageable, but because it is misclassified.

The Red Zone: Technologies That Demand Precaution

Red zone technologies are those where failures are high severity, low reversibility, and systemically coupled. Once released, harm cannot easily be undone. Effects may propagate across populations, ecosystems, or geopolitical boundaries. Containment is uncertain. Attribution may be impossible.

Examples include:

* Capabilities that enable large-scale biological harm
* Systems that can autonomously design and deploy irreversible interventions
* Technologies that concentrate overwhelming power with minimal accountability

In the red zone, speed is not the objective. Containment is. Governance here justifiably includes:

* Strict access controls
* Non-proliferation norms
* International coordination
* Formal review and approval processes

Red zone governance is not anti-innovation. It is pro-survivability. The mistake is not that red zones exist. The mistake is pretending everything belongs in one.

What Happens When Zones Collapse

When proportionality collapses, zones collapse with it. Green technologies are treated as red, choking off experimentation. Orange technologies are forced into binary decisions they cannot survive. Red technologies are either demonized theatrically or pursued covertly.

The result is a governance environment that is simultaneously too strict and too weak. This is how we end up with innovation flight, underground experimentation, and fragile oversight—exactly the opposite of what responsible innovation demands.

Zones Are Dynamic, Not Fixed

A critical feature of proportional governance is recognizing that zones are not permanent. Technologies migrate.

A green zone research tool may become orange as it scales. An orange zone system may become red as autonomy increases or coupling tightens. Conversely, red zone risks may move toward orange as containment, reversibility, or institutional capacity improves.

This is why responsible-by-design emphasizes continuous reassessment. Classification is not a one-time decision. It is an ongoing process informed by evidence, monitoring, and lived experience.

Governance Intensity Should Match the Zone

The central principle is simple: governance intensity should scale with risk, not with rhetoric. Green zone technologies need permissionless innovation. Orange zone technologies need active stewardship. Red zone technologies need precautionary constraint. Anything else is misalignment.

Why Zones Restore Proportionality

Zones do not eliminate disagreement. They make disagreement productive. Instead of arguing whether a technology is good or evil, stakeholders can argue about classification, evidence, and movement between zones. That is a solvable problem.

Zones reintroduce judgment without moral collapse. They allow societies to move fast where they can, slow where they must, and adapt as conditions change.

The Work Ahead

The future will not be governed by a single rulebook. It will be governed by systems that can distinguish between different kinds of risk in real time.

Green, orange, and red zones are not bureaucratic categories. They are cognitive tools. They are how proportionality becomes operational.

At the frontier of technology, humanity is the experiment. Zones are how we decide which experiments to run quickly, which to supervise carefully, and which to approach with extreme caution. That judgment—not absolutism—is the essence of responsible innovation.

-Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
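As a thought experiment, here is a deliberately crude sketch of how the four questions above could be turned into a zone rubric. The scoring scale and thresholds are invented for illustration; in practice they are exactly the things pluralistic classification would have to negotiate.

```python
# A toy sketch of the zone rubric described above. The four dimensions come
# from the essay; the 0-3 scales and the thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class RiskProfile:
    severity: int         # 0 (cosmetic) .. 3 (mass-scale harm)
    irreversibility: int  # 0 (trivially recoverable) .. 3 (cannot be undone)
    coupling: int         # 0 (isolated) .. 3 (tied into critical systems)
    distribution: int     # 0 (few controlled actors) .. 3 (widely proliferated)

def assign_zone(p: RiskProfile) -> str:
    # Severe harm that cannot be undone dominates every other signal.
    if p.severity >= 3 and p.irreversibility >= 2:
        return "red"
    # Otherwise bucket the summed score into green / orange / red.
    score = p.severity + p.irreversibility + p.coupling + p.distribution
    if score >= 9:
        return "red"
    return "orange" if score >= 4 else "green"

# Hypothetical examples: an in silico hypothesis-generation tool versus an
# openly released system coupled to automated synthesis.
print(assign_zone(RiskProfile(1, 0, 0, 1)))  # -> green
print(assign_zone(RiskProfile(3, 3, 2, 3)))  # -> red
```

A real rubric would also carry context (deployment setting, oversight regime) and an expiration date, since zone assignments are conditional and technologies migrate.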
Every generation of complex technology eventually collides with the same hard truth: it does not matter how carefully a system is designed if the institutions responsible for governing it cannot keep pace with its behavior.

In the early days of software security, this truth was learned painfully. Vulnerabilities were discovered months or years after exploitation. Patches arrived slowly. Disclosure was ad hoc. The result was not merely technical failure, but systemic fragility. As systems scaled, the gap between when harm occurred and when governance responded became untenable.

The modern concept of secure‑by‑design emerged as a response to this gap. But beneath the tooling, audits, and standards was a deeper insight: latency matters. The speed at which a system can be observed, understood, and corrected is just as important as the system’s nominal safety properties.

Today, as we enter an era of AI‑driven, bio‑enabled, and tightly coupled socio‑technical systems, we face a broader version of the same problem. The limiting factor is no longer raw capability. It is governance latency.

The podcast audio was AI-generated using Google’s NotebookLM.

What Governance Latency Is

Governance latency is the time it takes for a system’s behavior in the real world to be:

* Detected as meaningful or anomalous
* Interpreted as requiring intervention
* Acted upon through effective corrective measures

It is not simply regulatory delay. It includes organizational awareness, institutional authority, legal mechanisms, cultural incentives, and technical affordances. In practice, governance latency is the distance between impact and response.

A system with low governance latency can fail visibly, learn quickly, and adapt. A system with high governance latency accumulates hidden risk until failure becomes sudden, large‑scale, and politically explosive.

Why Latency, Not Intent, Determines Safety

Much of the public debate about emerging technologies focuses on intent. Were developers careful? Were safeguards included? Were ethical principles articulated? These questions matter—but they are insufficient.

History shows that most large‑scale technological harm does not arise from malicious intent. It arises from slow feedback loops in fast systems.

Financial crises are rarely caused by bad actors alone; they are caused by leverage and opacity that outpace regulatory response. Environmental disasters are rarely the result of ignorance; they emerge when monitoring, enforcement, and remediation lag behind industrial activity. Cybersecurity incidents are rarely shocking because they are novel; they are shocking because known vulnerabilities persisted too long.

Governance latency is the common thread. When governance moves slower than system behavior, even well‑intentioned designs become dangerous.

The Three Components of Governance Latency

To treat governance latency as an engineering problem, it must be decomposed.

Detection Latency

Detection latency is the time between a system’s harmful or anomalous behavior and the moment that behavior is recognized.

In AI systems, this might include the time it takes to identify misuse, model drift, emergent capabilities, or unexpected coupling effects. In biological systems, it could be the time required to detect unintended propagation, off‑target effects, or supply‑chain misuse.

High detection latency often stems from poor observability, fragmented data ownership, or incentives that discourage surfacing problems early.

Interpretation Latency

Interpretation latency is the time between recognizing a signal and agreeing that it requires action.

This is where ambiguity, disagreement, and institutional friction dominate. Is this anomaly noise or danger? Is it within scope or outside mandate? Who has authority to decide?

Interpretation latency is often the longest component—and the least discussed. It is shaped by governance structures, legal clarity, and cultural norms around escalation and responsibility.

Execution Latency

Execution latency is the time it takes to implement an effective response once a decision has been made.

This includes technical rollback capability, contractual authority, regulatory power, and operational readiness. A policy without enforcement capacity does not reduce latency; it hides it.

(A minimal sketch of this three-part decomposition appears at the end of this note.)

Governance Latency in the AI × Bio Era

AI‑enabled biological systems compress timelines dramatically. Discovery cycles accelerate. Automation reduces friction. Capabilities propagate digitally before they materialize physically. The window between benign use and high‑impact misuse narrows.

At the same time, governance remains slow. Biosafety frameworks were designed for localized laboratories, not globally networked models. AI oversight mechanisms were built for software, not systems that interface directly with physical and biological reality. Legal authority is fragmented across agencies with mismatched scopes.

The result is a widening gap between capability velocity and governance velocity. When this gap grows too large, society compensates by inflating perceived risk. Catastrophic framing becomes a substitute for real‑time control. Moratoria and blanket bans become appealing because they appear to eliminate the latency problem rather than solve it. This is a predictable failure mode.

Governance Latency and the Collapse of Proportionality

Governance latency and proportionality collapse are tightly coupled. When institutions cannot respond quickly or credibly, every risk begins to look existential. When response mechanisms are blunt, nuanced distinctions lose meaning. Severity and reversibility blur together.

In this context, demands for zero risk are not irrational—they are compensatory. They reflect a lack of confidence that smaller failures will be caught and corrected before becoming larger ones. Restoring proportionality therefore requires reducing governance latency.

Reducing Governance Latency by Design

A responsible‑by‑design approach treats governance latency as a core system constraint.

This begins with observability. Systems must be instrumented to surface meaningful signals early. Auditability, logging, and monitoring are governance tools, not mere compliance artifacts.

It continues with clear authority. Decision rights must be explicit. Escalation paths must be rehearsed. Responsibility must be owned, not diffused.

It requires technical reversibility. Rollback mechanisms, staged deployment, and containment boundaries reduce execution latency by design.

And it depends on institutional readiness. Regulators, oversight bodies, and internal governance teams must have the expertise and mandate to act at system speed.

None of this eliminates risk. It shortens the feedback loop.

Governance Latency Is a Strategic Variable

Organizations often treat governance as an external constraint. In reality, governance latency is a competitive variable.

Systems that can detect, interpret, and correct faster are safer—and therefore able to scale with greater legitimacy. Trust accumulates around responsiveness, not perfection.

The fastest path forward is not reckless acceleration, but aligned acceleration: moving quickly within systems that can adapt when reality diverges from expectation.

Why This Matters Now

As technologies converge, failures propagate across domains. AI systems affect biological systems, which affect economic systems, which affect political systems. In such an environment, delayed governance is not neutral—it is destabilizing.

Reducing governance latency is therefore not merely a technical challenge. It is a societal one. It requires rethinking how authority, expertise, and accountability are structured in a world where systems evolve continuously.

The Discipline Ahead

Governance latency is not an argument for control over innovation. It is an argument for competent oversight.

It shifts the focus from predicting every failure to responding effectively when failure occurs. It reframes responsibility as responsiveness. It aligns safety with speed rather than opposing it.

At the frontier of technology, humanity is the experiment. Reducing governance latency is how we ensure that experiment remains corrigible. That is the discipline ahead.

-Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
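To make the decomposition above concrete, here is a minimal sketch of governance latency as bookkeeping over incident timestamps. The incident and its dates are hypothetical; in a real system the timestamps would come from monitoring, incident-review, and deployment logs.

```python
# A toy sketch of the essay's decomposition: governance latency =
# detection latency + interpretation latency + execution latency.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    harm_began: datetime        # anomalous behavior starts in the world
    detected: datetime          # signal recognized as meaningful
    action_decided: datetime    # interpreted as requiring intervention
    action_effective: datetime  # corrective measure actually lands

    @property
    def detection_latency(self) -> timedelta:
        return self.detected - self.harm_began

    @property
    def interpretation_latency(self) -> timedelta:
        return self.action_decided - self.detected

    @property
    def execution_latency(self) -> timedelta:
        return self.action_effective - self.action_decided

    @property
    def governance_latency(self) -> timedelta:
        # End-to-end distance between impact and response.
        return self.action_effective - self.harm_began

incident = Incident(
    harm_began=datetime(2026, 1, 3),
    detected=datetime(2026, 1, 17),         # two weeks to notice
    action_decided=datetime(2026, 3, 2),    # ~six weeks to agree it matters
    action_effective=datetime(2026, 3, 9),  # one week to roll back
)
print(incident.detection_latency, incident.interpretation_latency,
      incident.execution_latency, incident.governance_latency)
```

Even in this toy example, interpretation latency dominates, which matches the essay's observation that agreeing a signal requires action is usually the slowest and least discussed step.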
The meeting begins the way these meetings always begin: with urgency masquerading as certainty.

On one side of the table—sometimes literal, sometimes virtual—are the accelerationists. They speak in timelines measured in patients, not papers. Millions of people living with rare diseases. Cancers with no second-line therapies. Pandemics that will not wait for perfect governance. To them, AI-enabled biology is not speculative power; it is applied mercy. Every month of delay is a body count.

On the other side are the catastrophists, though they would reject the label. They speak in failure modes and irreversibility. Dual-use risks. Model-enabled pathogen design. Democratized capabilities that outrun containment. They are not wrong either. Biology is not software. You cannot roll back a release into the wild. Some mistakes are not recoverable.

Both sides arrive armed with evidence. Both claim the moral high ground. Both accuse the other—quietly or loudly—of irresponsibility. And somewhere in the middle, the actual work stalls.

This is the AI × Bio debate in 2026: not a disagreement about facts, but a collapse of proportionality.

The conversation flattens almost immediately. AI-driven protein design that accelerates enzyme discovery is discussed in the same breath as hypothetical bioengineered pandemics. A foundation model used to prioritize drug targets is rhetorically adjacent to one capable of designing novel toxins. The distinction between assisted discovery and autonomous synthesis blurs. Context collapses. Everything is “potentially catastrophic.”

As the risks inflate, so do the demands. Zero misuse. Perfect foresight. Absolute guarantees.

The scientists in the room shift uncomfortably. They know biology does not work this way. Neither does engineering. Neither does medicine. They have lived through failure—clinical trials that didn’t work, molecules that looked promising and then didn’t translate, therapies that helped some patients and harmed others. Progress, in their world, has always been probabilistic.

But probability has no place in a proportionality collapse. Only absolutes survive.

So the discussion veers toward moratoria. Blanket restrictions. Calls to “pause AI in biology” until governance “catches up,” without defining what “caught up” would even mean. The proposed controls are not scoped to capabilities or contexts; they are scoped to fear.

On the other side, frustration hardens into dismissal. If every advance is treated as an existential threat, why engage at all? Why submit to oversight that cannot distinguish between a wet-lab automation tool and a weapon? Why not move faster, quieter, offshore?

This is how the middle disappears.

What gets lost in this collapse is the ability to ask better questions.

Not “Is AI in biology dangerous?” but “Which applications, under what conditions, with what controls, and with what reversibility?”

Not “Should we stop?” but “Where should we slow down, where should we speed up, and who decides?”

Not “Can we guarantee safety?” but “What governance posture is proportionate to this specific risk surface?”

In the absence of proportionality, governance becomes symbolic. Ethics reviews devolve into box-checking or veto power. Real risks—like poorly secured synthesis pipelines, informal model sharing, or fragile oversight capacity in under-resourced labs—receive less attention than hypothetical doomsday scenarios.

Meanwhile, the work does not actually stop. It fragments.

Large, well-capitalized institutions with legal teams and compliance departments continue quietly. Smaller labs and startups struggle under vague constraints. Informal experimentation moves to jurisdictions with weaker oversight. Open science communities fracture, unsure whether sharing is noble or negligent.

The irony is brutal: a discourse obsessed with catastrophic risk ends up increasing unmanaged risk. This is the illusion of safety produced by proportionality collapse.

True responsibility in AI × Bio does not come from pretending all risks are equal. It comes from distinguishing them.

A model that helps identify promising CRISPR targets in rare disease research does not warrant the same governance as one capable of end-to-end pathogen design. A tool used inside a regulated pharmaceutical pipeline is not the same as one released openly with no guardrails. A reversible error in silico is not the same as an irreversible release in vivo.

These distinctions matter. They are the difference between precaution and paralysis.

A responsible-by-design approach to AI × Bio would not ask for impossible guarantees. It would ask for classification. It would map severity against reversibility. It would align governance intensity with systemic impact. It would invest in institutional capacity—biosafety, biosecurity, auditability—rather than performative restraint.

Most importantly, it would accept the hardest truth in the room: that not acting also carries risk. Lives not saved. Diseases not treated. Pandemics not predicted early enough. Tools that could have helped, but didn’t, because the debate collapsed into absolutes.

The AI × Bio debate does not need less concern. It needs better judgment.

Restoring proportionality does not mean choosing sides. It means rebuilding the middle—the space where tradeoffs are named, risks are differentiated, and responsibility is practiced rather than proclaimed. Without that middle, the debate will continue to generate heat without light. With it, AI × Bio can become what it already has the potential to be: not an existential gamble, but a disciplined, human-centered extension of medicine itself.

At the frontier of biology, AI is not the experiment. We are.

-Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
There is a quiet failure mode running through nearly every contemporary debate about technology. It shows up in boardrooms and policy hearings, on social media and in academic journals, inside engineering teams and activist movements alike. It is not primarily a disagreement about values, nor is it a simple conflict over facts. It is something more fundamental.

We have lost our sense of proportionality.

In today’s technology discourse, every risk is framed as catastrophic, every acceleration is framed as reckless, and every delay is framed as negligent. The space between those extremes—the space where judgment, tradeoffs, and responsibility actually live—has collapsed.

This collapse matters far more than it appears. Proportionality is not a rhetorical nicety. It is a core operating principle of engineering, governance, and strategy. When proportionality fails, decision-making fails with it. Systems oscillate between paralysis and overreach. Public trust erodes. Innovation becomes brittle, lurching forward in bursts and freezing in backlashes.

If the Science of Responsible Innovation is to mean anything beyond a slogan, it must begin by restoring proportionality.

The podcast audio was AI-generated using Google’s NotebookLM.

What Proportionality Actually Is

Proportionality is the disciplined ability to reason about magnitude, likelihood, reversibility, and distribution—simultaneously. It is the habit of asking not just whether a risk exists, but:

* How severe would the harm be if it materialized?
* How likely is it to occur under realistic conditions?
* How reversible are the consequences?
* Who bears the downside, and who captures the upside?

These questions are second nature to engineers. A hairline crack in a cosmetic panel is not treated the same way as a fracture in a load‑bearing beam. A memory leak is not a reactor meltdown. A degraded sensor is not total system failure. Entire fields of safety engineering exist to distinguish tolerable risk from intolerable risk—and to allocate attention, controls, and redundancy accordingly.

Governance relies on the same logic. Laws differentiate between misdemeanors and felonies. Financial regulation scales with systemic importance. Insurance exists precisely because not all risks justify prevention; some are better priced, pooled, and absorbed.

Proportionality is how complex societies remain functional in the presence of uncertainty.

How Proportionality Collapsed

The collapse of proportionality did not happen overnight, and it did not happen for a single reason. It is the product of several reinforcing dynamics that have reshaped how modern societies perceive risk.

Scale Without Intuition

Modern technologies operate at scales that exceed human intuition. A single software update can affect hundreds of millions of people. A model parameter change can shift behavior across entire markets. A biological technique can propagate globally before institutions have time to respond.

When scale explodes faster than our mental models, we default to worst‑case thinking. Catastrophic framing becomes a cognitive shortcut—an attempt to impose seriousness on phenomena we do not yet know how to bound.

The Moralization of Tradeoffs

In many domains, tradeoffs have become morally taboo. To acknowledge that saving lives today may increase future risk is treated as callous. To admit that restricting access may entrench inequality is treated as cynical. To say that some harms are acceptable relative to benefits is framed as an ethical failure.

But tradeoffs do not disappear when we refuse to name them. They simply go underground, where they are made implicitly, inconsistently, and without accountability. Moral absolutism does not eliminate risk; it obscures decision-making.

Incentive Compression and Outrage Economics

Modern discourse rewards absolutism. Outrage travels faster than nuance. Certainty outperforms probability. Apocalyptic warnings are amplified; calibrated risk assessments are ignored. Shock doctrine is in full force in today’s discourse.

Inside organizations, incentives often mirror this dynamic. Escalation is safer than calibration. Resistance is safer than responsibility. Over time, leaders learn that the least punishable rhetorical position is the most extreme one—regardless of whether it maps to reality.

Institutional Fragility

As trust in institutions erodes, so does confidence in their ability to manage risk. When regulators are perceived as slow or captured, when companies are perceived as reckless, when experts are perceived as conflicted, society compensates by inflating the perceived severity of every risk.

Catastrophic framing becomes a substitute for governance capacity. Ironically, this further weakens institutions, creating a self‑reinforcing cycle.

What the Collapse Produces

When proportionality collapses, three pathologies reliably emerge.

First is risk flattening. Minor harms and existential threats are treated as morally equivalent. When everything is catastrophic, prioritization becomes impossible. Attention is spread thinly across vastly different risk surfaces, and the most serious risks often receive the least structured oversight.

Second is decision paralysis. Leaders confronted with incompatible absolute claims retreat into delay, deferral, or symbolic action. Progress stalls not because risks are too high, but because they are framed as incomparable.

Third is backlash cycling. Technologies deployed under inflated promises and inflated fears inevitably fail in small, normal ways. Those failures trigger overcorrection. Regulation swings from permissive to prohibitive. Public trust collapses. Legitimate benefits are lost alongside real harms.

These are not abstract dynamics. They appear repeatedly in debates over artificial intelligence, biotechnology, energy systems, and digital infrastructure.

The Illusion of Safety

One of the great ironies of the collapse of proportionality is that it feels like caution. Catastrophic framing masquerades as responsibility. Demanding zero risk sounds prudent. Treating every failure as unacceptable feels ethical.

In reality, this posture often produces less safety. When all risks are treated as intolerable, systems are driven underground or offshore. Informal use proliferates without oversight. Innovation concentrates in unaccountable hands. Legitimate actors retreat, leaving the field to those least inclined toward restraint.

True safety does not come from eliminating risk. It comes from managing it—openly, proportionally, and adaptively.

Restoring Proportionality as a Design Discipline

Restoring proportionality is not about telling people to “be reasonable.” It requires structure. A Science of Responsible Innovation restores proportionality by embedding it into design and governance processes from the outset.

This begins with explicit classification. Not all systems warrant the same scrutiny. Not all capabilities demand the same controls. Severity, likelihood, reversibility, and distribution must be assessed deliberately, not rhetorically.

It continues with differentiated governance. High‑severity, low‑reversibility risks justify precautionary postures and non‑proliferation norms. Moderate risks justify resilience engineering, monitoring, and rollback mechanisms. Low‑severity risks justify mitigation, insurance, and compensation.

Most importantly, proportionality must be revisited continuously. As systems scale, interact, and mutate, their risk profiles change. Governance must evolve in step.

Proportionality Is Not Permission

Restoring proportionality does not mean minimizing harm or dismissing legitimate concern. On the contrary, proportionality is how we take harm seriously. It forces us to allocate attention and resources where they matter most. It prevents symbolic debates from crowding out substantive ones. It enables disagreement without moral collapse.

A society that cannot reason proportionally will either freeze or fracture. A society that can will move faster—and more safely—than one that cannot.

Why This Is the Central Challenge of the Decade

The technologies reshaping this decade are not marginal improvements. They are general‑purpose systems that interact with nearly every domain of human life.

Without proportionality, governance becomes theater. Ethics becomes branding. Responsibility becomes a slogan. With proportionality, we can distinguish between risks that demand restraint and risks that demand acceleration. We can save lives today without ignoring tomorrow. We can move fast without pretending speed is free.

At the frontier of technology, humanity is the experiment. Proportionality is how we keep that experiment from becoming reckless—or paralyzed.

The collapse of proportionality is not inevitable. But restoring it will require discipline, humility, and a willingness to replace absolutism with judgment. That is the work ahead.

-Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
From Secure‑by‑Design to Responsible‑by‑Design

For the last three decades, the most mature technology organizations have learned a hard lesson: security cannot be bolted on after the fact. It must be designed in—architected, tested, audited, and continuously reinforced. Secure‑by‑design did not emerge because engineers suddenly became more ethical. It emerged because complexity, scale, and interconnectedness made reactive security failures existential.

We are now at an analogous inflection point—one that extends beyond cybersecurity and into the fabric of innovation itself.

The technologies defining this era—artificial intelligence, advanced biotechnology, robotics, fusion and new energy systems, and novel computing paradigms—do not merely add new capabilities. They reconfigure power, agency, labor, geopolitics, and responsibility itself. They are not tools that sit neatly inside existing institutions; they stress those institutions, bypass them, and sometimes render them obsolete.

In this context, responsibility can no longer be treated as a moral aspiration or a compliance checklist. Just as security matured into an engineering discipline, responsibility must now do the same. We need a responsible‑by‑design paradigm—one that is as rigorous, operational, and measurable as secure‑by‑design ever became.

This is the animating premise of The Science of Responsible Innovation.

In 2026, we are past the novelty phase of generative AI and well into the deployment‑and‑consequences phase. Similar transitions are unfolding across biotechnology, robotics, and energy. The central question is no longer whether we can build these systems, but whether we can govern their creation and deployment without destabilizing the societies they are meant to serve.

I want to lay out how TCIP will think, write, convene, and build in 2026—and why the next era of progress depends on treating responsibility not as an ethical afterthought, but as a scientific discipline.

The podcast audio was AI-generated using Google’s NotebookLM.

Part I: From Ethics to Engineering Responsibility

Why Ethics Alone No Longer Scales

Ethics has played an essential role in shaping modern technology discourse. It gives us language for values—fairness, dignity, autonomy, beneficence. It provides moral guardrails and helps articulate what should not be done.

But ethics, on its own, does not scale to systems of this magnitude.

Ethical frameworks are necessarily abstract. They are interpretive rather than prescriptive. They tell us what we value, but rarely tell us how to engineer tradeoffs under real constraints: incomplete data, conflicting objectives, adversarial environments, and non‑negotiable timelines.

Telling a product team to “avoid bias” does not specify which dataset to discard when representativeness conflicts with accuracy. Telling a research lab to “consider societal impact” does not explain how to weigh lives saved today against uncertain risks decades from now. In practice, ethics too often becomes a late‑stage review process—arriving after architectures are fixed and incentives locked in.

At that point, ethics becomes reactive. Harm mitigation replaces harm prevention.

Responsibility as a Design Constraint

A science of responsible innovation begins from a different premise: responsibility is not a moral overlay, but a design variable.

To call responsibility a science is to make three specific claims:

* First, responsibility can be operationalized. It can be translated into concrete requirements, metrics, and processes that shape technical and organizational decisions.
* Second, responsibility can be optimized. There are better and worse ways to align technological capability with human values, institutional capacity, and societal readiness.
* Third, responsibility evolves. As systems interact with the real world, their risks, benefits, and failure modes change. Responsible innovation therefore requires continuous measurement, feedback, and adaptation.

This reframing moves responsibility out of philosophy seminars and into engineering reviews, product roadmaps, capital allocation decisions, and board‑level governance.

Part II: The Core Pillars of a Science of Responsible Innovation

1. Systems Thinking Over Feature Thinking

Most technological harm does not arise from malicious intent. It arises from systems—feedback loops, emergent behaviors, misaligned incentives, and second‑ or third‑order effects that were invisible at the point of design.

A science of responsible innovation therefore begins with systems thinking. It asks:

* What ecosystems will this technology enter?
* What institutions will it stress, bypass, or hollow out?
* What incentives will it amplify or distort?
* What dependencies will it create—and how reversible are they?

This requires moving beyond product‑centric thinking toward full lifecycle analysis. In AI, this means examining data provenance, labor practices, deployment contexts, and governance interfaces—not just model performance. In biotechnology, it means considering supply chains, regulatory regimes, and dual‑use risks alongside molecular function.

Systems thinking does not slow innovation. It prevents brittle success—products that scale rapidly only to collapse under the weight of their own externalities.

2. Anticipatory Risk Without the Illusion of Omniscience

A common critique of responsible innovation is that it demands impossible foresight. How can we predict every misuse, every unintended consequence, every societal reaction?

The answer is: we cannot—and we do not need to.

A science of responsible innovation does not seek prediction; it seeks preparedness. It focuses on identifying plausible risk surfaces, stress‑testing assumptions, and building adaptive capacity.

This is where techniques such as scenario analysis, violet teaming, and horizon scanning become central. Rather than asking “What will happen?”, we ask:

* What could go wrong under reasonable assumptions?
* What happens if this system is used at scale, under adversarial pressure, or outside its intended context?
* Where are the irreversibilities?

Responsibility, in this sense, is less about foreseeing the future and more about designing systems that fail gracefully—and visibly—when the future surprises us.

3. The Collapse of Proportionality

One of the most damaging pathologies of contemporary technology discourse is the collapse of proportionality.

Every risk is framed as catastrophic. Every acceleration is framed as reckless. Every delay is framed as negligent.

When proportionality collapses, decision‑making collapses with it. Leaders oscillate between paralysis and overreach. Public debate becomes absolutist. Tradeoffs—real, unavoidable tradeoffs—are treated as moral failures rather than engineering realities.

A science of responsible innovation restores proportionality. It creates shared frameworks for reasoning about magnitude, likelihood, reversibility, and distribution of harm and benefit.

This includes weighing immediate life-saving benefits against high-uncertainty, high-severity future risks; democratizing access against increasing misuse potential; and decentralization against accountability.

Responsible innovation does not deny these tensions. It makes them explicit, measurable, and governable.

4. Institutional Fit and Capacity Alignment

Technologies do not land in abstract societies. They land in institutions—with laws, norms, competencies, and failure modes.

Responsible innovation therefore asks not only “Is this technology safe?” but “Is this system ready?”

* Do regulators have the technical capacity to oversee it?
* Do courts have frameworks to adjudicate harm?
* Do users understand its limits?

In some cases, responsibility means delaying deployment until institutional capacity catches up. In others, it means investing directly in that capacity as part of the innovation process.

The lesson of secure‑by‑design applies here: technology that outruns governance does not remain free—it invites overcorrection.

5. Continuous Oversight and Adaptive Governance

Responsibility does not end at launch.

Complex systems evolve. Users adapt. Adversaries probe. Markets shift. Responsible innovation therefore treats governance as continuous, not episodic.

This includes post‑deployment monitoring, incident reporting, audit trails, rollback mechanisms, and sunset clauses. It also requires cultural norms that reward early disclosure of failure rather than concealment.

Responsibility, like security, is never “done.” It is maintained.

Part III: Measurement—Turning Responsibility Into a Science

If responsibility is a science, it must be measurable.

This does not mean reducing ethics to a single number. It means developing proxy metrics that allow organizations to reason rigorously about risk, benefit, and uncertainty.

Examples include:

* Key Risk Indicators (KRIs) tied to misuse likelihood, systemic dependency, or institutional fragility.
* Economic models that quantify near‑term benefits against long‑tail downside.
* Auditability measures that track decision provenance and model evolution.
* Governance latency metrics that assess how quickly oversight mechanisms can respond to failure.

Measurement does not eliminate judgment—but it disciplines it. It allows disagreements to be surfaced, assumptions to be tested, and learning to compound over time.

Part IV: Practicing Responsible‑by‑Design

What does this science look like in practice?

It looks like interdisciplinary teams where engineers, domain scientists, economists, and policy experts work together from inception—not as reviewers, but as co‑designers.

It looks like violet teams with real authority, whose findings shape roadmaps rather than decorate reports.

It looks like staged deployment strategies that deliberately constrain early use cases and expand only as evidence accumulates.

Crucially, it looks like executive ownership.

A science of responsible innovation cannot live in a safety silo. It must be integrated into business strategy, capital allocation, and P&L accountability.
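The Part III metrics above lend themselves to code. Below is a minimal sketch of two of them, a governance latency metric and a composite key risk indicator. The field names, weights, and sample numbers are invented for illustration and are not drawn from the essay:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from statistics import mean

@dataclass
class Incident:
    detected: datetime
    mitigated: datetime

def governance_latency(incidents: list[Incident]) -> timedelta:
    """Mean time from detecting a failure to an effective oversight response."""
    return timedelta(seconds=mean(
        (i.mitigated - i.detected).total_seconds() for i in incidents))

def composite_kri(misuse_likelihood: float,
                  systemic_dependency: float,
                  institutional_fragility: float,
                  weights: tuple = (0.5, 0.3, 0.2)) -> float:
    """Weighted key risk indicator on a 0-1 scale (weights are illustrative)."""
    signals = (misuse_likelihood, systemic_dependency, institutional_fragility)
    return sum(w * s for w, s in zip(weights, signals))

incidents = [Incident(datetime(2026, 1, 3, 9), datetime(2026, 1, 5, 17)),
             Incident(datetime(2026, 2, 10, 8), datetime(2026, 2, 11, 12))]
print("governance latency:", governance_latency(incidents))
print("composite KRI:", round(composite_kri(0.4, 0.7, 0.5), 2))
```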
There are moments in history when the axis of human understanding tilts just enough to change the course of civilization. The printing press. The microscope. The transistor. And now, the emergence of scientific superintelligence—the culmination of the converging FABRIC technologies: Fusion, Artificial Intelligence, Biotechnology, Robotics, and Innovative Computing.

For centuries, science has been the slowest part of progress. Not because of a lack of tools, but because of the bandwidth of thought. We’ve been limited by the speed at which humans can hypothesize, experiment, and interpret. Even as our instruments have become more precise, our collective reasoning has remained bounded by the cognitive bottleneck of the human mind. But the 21st century—accelerated by the twin engines of biotechnology and artificial intelligence, and now reinforced by the entire FABRIC stack—is rewriting that story.

The podcast audio was AI-generated using Google’s NotebookLM.

In 2012, two scientific events quietly redefined the possibilities of life and intelligence. Jennifer Doudna and Emmanuelle Charpentier published the now-famous CRISPR paper, showing that a simple bacterial immune system could become a universal gene-editing tool. That same year, a deep learning architecture called AlexNet shattered decades of stagnation in computer vision by teaching a machine to see with superhuman precision. These two breakthroughs—one biological, one computational—ignited revolutions that would ultimately converge. CRISPR gave us the ability to edit the code of life. AlexNet gave us the ability to teach machines to learn from the world. Together, they set the stage for a new epoch: a world where evolution itself becomes an engineered process, and discovery becomes a computable one.

A decade later, in 2022, that convergence took on linguistic form. When OpenAI released ChatGPT, the public met an AI system that didn’t just process data—it reasoned, synthesized, and conversed. It was the first time that the invisible machinery of deep learning felt human-adjacent, even if imperfectly so. It marked the beginning of an era where artificial intelligence could participate in the generative act of thought itself. But what happens when that reasoning power, now applied to words, is turned toward the practice of science? What happens when the scientific method itself—hypothesis, experiment, analysis, iteration—becomes not just assisted by computation, but executed by it?

That question defines the next decade. And the answer is the birth of scientific superintelligence—a distributed architecture where the FABRIC of progress fuses into one coherent system of discovery.

The FABRIC of Discovery

The future of science is being woven from five threads: Fusion, AI, Biotech, Robotics, and Innovative Computing. Each is transformative on its own. Together, they form the infrastructure of a new epistemology.

* Fusion represents the energy substrate—the ability to power our ambitions indefinitely. It’s not just about clean energy; it’s about enabling limitless experimentation. When computation and experimentation are no longer resource-bound, science becomes a perpetual motion machine.
* AI provides the reasoning substrate—the ability to generate and test hypotheses at scale. It moves us from data analysis to knowledge synthesis, from automation to cognition.
* Biotechnology is the substrate of life itself—the medium through which the principles of learning and evolution are physically realized. Synthetic biology, cell-free systems, and programmable genomes turn life into a computational domain.
* Robotics brings embodiment to science—hands that execute, instruments that perceive, and autonomous labs that close the loop between idea and result. They make scientific iteration continuous and scalable.
* Innovative Computing—from quantum to neuromorphic systems—provides the architecture for complexity. It enables reasoning across hierarchies of matter, energy, and information, accelerating discovery beyond the limits of classical computation.

When woven together, these technologies form a self-reinforcing feedback loop of discovery. Fusion powers computation. Computation guides biology. Biology informs robotics. Robotics accelerates experimentation. And the entire system learns from itself. This is not incremental progress—it’s recursive progress. A civilization-scale experiment in teaching the universe to understand itself.

The Bottleneck of Discovery

The history of science is a history of bottlenecks. The telescope expanded our observation. The printing press expanded our communication. The computer expanded our calculation. But the act of discovery—the process by which we generate, test, and refine ideas about reality—has remained stubbornly analog. It’s still a craft, dependent on the intuition of the few and the slow accumulation of the many.

The modern scientific method, formalized during the Enlightenment, has served us well. It taught us to build knowledge through falsification, replication, and peer review. But it also introduced latency. Each hypothesis requires months—or years—of design, funding, experimentation, and publication. Each insight is mediated by human bias, institutional inertia, and the physics of paper. In the 20th century, this model worked because the world changed linearly. In the 21st, it no longer does.

We now live in an exponential century. Data doubles faster than our ability to interpret it. Biological and physical systems are too complex for human reasoning alone. The problem isn’t that science is wrong—it’s that it’s too slow. And in the face of pandemics, climate tipping points, and the rapid fusion of intelligence and matter, slow science is a form of existential risk.

That’s why the paradigm shift now underway isn’t just about new technologies—it’s about a new architecture for knowledge itself.

From Tools to Teammates

Scientific superintelligence isn’t an algorithm. It’s an ecosystem. It’s the convergence of the FABRIC stack—automated experimentation, large-scale reasoning, self-improving models, and human collaboration loops. It’s the transition from science done by humans with tools to science done by systems with humans.

The early precursors already exist. Self-driving labs at places like Carnegie Mellon and AstraZeneca now design, execute, and optimize experiments faster than any research team could. Foundation models are learning chemistry and protein folding from first principles. Multimodal AI systems can read the literature, generate hypotheses, design experiments, and interpret results. What we’re witnessing is the emergence of the first AI Scientists—machines capable of reasoning about the unknown.

In 2021, Hiroaki Kitano published the Nobel Turing Challenge manifesto, proposing a goal audacious enough to rally a generation: to build an AI scientist capable of winning a Nobel Prize by 2050. It was more than a technical challenge; it was a philosophical one. Could we design a system capable not just of automation, but of autonomy? Could we build a machine that doesn’t just execute experiments, but understands the principles behind them?

I co-funded the first international workshop on that challenge through ONR Global while at the Pentagon, precisely because it represented the next great leap: not in computation, but in cognition. We weren’t just funding research—we were redefining what it meant to do science. The ultimate goal wasn’t to replace scientists, but to expand the frontiers of discovery beyond the limits of human attention. It was a recognition that the future of knowledge creation depends on merging human curiosity with machine capacity.

Accelerating at the Speed of Computation

If the 2010s were the decade of learning and the 2020s the decade of reasoning, the 2030s will be the decade of discovery engines. Over the next ten years, we’ll witness the birth of autonomous science systems—networks of reasoning models, robotic labs, fusion-powered computing clusters, and self-updating knowledge graphs that continuously generate, test, and refine hypotheses.

These systems will operate at the speed of computation rather than the speed of thought. They’ll ingest the totality of human knowledge—papers, data sets, code, experimental logs—and model the unexplored corners of possibility. They’ll propose new experiments, run them autonomously, and feed the results back into their reasoning architecture. Discovery will become continuous, recursive, and accelerating.

The implications are staggering. We’ll see the rise of fully integrated “closed-loop science” platforms where hypothesis generation, experimental execution, and theory formation occur simultaneously across digital and physical domains. Biology will become as programmable as software. Materials science will evolve from serendipity to search. Climate modeling will shift from simulation to synthesis. The very idea of a research project will transform—from a human-led process to a system-led evolution of understanding.

The Human in the Loop

Scientific superintelligence won’t make scientists obsolete—it will make them more essential. The new frontier of science isn’t about doing experiments; it’s about designing the systems that do them. It’s about crafting architectures of curiosity, embedding ethics into algorithms, and teaching machines what matters.

Humanity’s comparative advantage won’t be in calculation but in context. We’ll define the questions, interpret the meaning, and connect the discoveries back to values, narratives, and needs. The next Einstein might not be a person—it might be a distributed system trained on all of human science—but the next Darwin will still be human, because synthesis, empathy, and storytelling remain ours to give.

That’s why the most important scientific institutions of the next decade won’t be universities or corporations, but hybrid ecosystems—places where humans and machines co-create understanding. The scientist of 2035 will look less like a lab-coat researcher.
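At its core, the closed-loop science described above is a propose, test, and update cycle. The toy sketch below shows that loop shape in Python; the "model" and "experiment" are stand-in functions invented for illustration, not any real discovery engine:

```python
import random

random.seed(42)

def propose_hypotheses(knowledge: dict, n: int = 5) -> list[float]:
    """Reasoning-model stand-in: propose candidate parameters near the current best."""
    best = knowledge["best_guess"]
    return [best + random.gauss(0, knowledge["exploration"]) for _ in range(n)]

def run_experiment(candidate: float) -> float:
    """Robotic-lab stand-in: noisy measurement of an unknown optimum at 3.7."""
    return -(candidate - 3.7) ** 2 + random.gauss(0, 0.05)

knowledge = {"best_guess": 0.0, "exploration": 2.0}
for cycle in range(10):
    candidates = propose_hypotheses(knowledge)             # hypothesize
    results = [(run_experiment(c), c) for c in candidates] # execute
    score, best = max(results)                             # interpret
    knowledge["best_guess"] = best                         # update the model
    knowledge["exploration"] *= 0.7                        # refine the search
    print(f"cycle {cycle}: best_guess={best:.2f} score={score:.3f}")
```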
Hey my friends,

It feels surreal to be sending you this edition of Tech Tuesday in the afterglow of releasing my first novel - Synthetic Eden - this morning. If you’ve been following along, you know the story has been in my head for years, sitting at the intersection of science, technology, and the impossible-to-shake questions about what it means to be human. But until recently, I never imagined I’d find myself writing fiction—let alone publishing a 135,000-word novel, the first of a four-book series.

And I didn’t do it alone.

This week, I want to take you behind the scenes of my collaboration with Sean Platt—a science fiction writer who has penned more books than I’ve read in some years, and who has taught me that story can be the most powerful laboratory we have. Our conversation (which you can listen to on the TCIP podcast, above) turned into an origin story of sorts—not just for Synthetic Eden and the entire Echoes of Tomorrow series, but for the way I now think about science, ethics, and fiction as partners in progress.

Thank you for opening a public dialogue on science, tech, and the social and systems challenges we are facing through your creative arts. I’m a big fan.
-Synthetic Eden fan

A Scientist Meets a Storyteller

I’ll be honest—when I first reached out to Sean, I wasn’t sure what I was getting into. I knew how to think like a scientist. I knew how to write policy briefs, reports, and endless grant proposals. But fiction? That was another world. What I had was a vision—a story about the ethics of human genetic engineering, set against the backdrop of a future we’re already building. What I lacked was the craft to bring that vision to life in a way that could resonate with actual people, not just policy wonks and scientists.

Sean likes to say I showed up with curiosity. He told me during our podcast that it was “a creative narcotic” for him—that rare mix of ambition and humility, knowing exactly what I could contribute and what I couldn’t. That honesty, he said, made it easy to collaborate. I told him what I tell you now: there’s no such thing as the lone genius. If you want to push the frontier, you need to build bridges with people who bring different strengths to the table.

In our case, the bridge was story.

Moral World-Building

When most people hear “world-building,” they think of fantastical landscapes—two moons, indigo seas, shimmering cities. Our task was different. We had to build a world with split morals (we call it ethical world-building in the podcast). A place where every character’s worldview made sense—not because they were right or wrong, but because they were human.

That’s harder than it sounds.

Our protagonist, Samara, and our secondary lead, Ayesha, embody opposing moral compasses. And the beauty—at least I hope you’ll find it beautiful—is that neither of them is wrong. Their beliefs emerge from their histories, their traumas, their triumphs. One of my favorite moments is when Ayesha confides in Samara, and you realize, painfully, that if you had lived her life, you’d probably believe what she believes.

That, to me, is the power of fiction. In science and policy, it’s easy to draw hard lines. But in story, you can’t escape the fact that even your “antagonist” is a person you might understand, even if you disagree. That’s not just storytelling craft—it’s a lesson in humility, one we need badly in a world tearing itself apart along ideological seams.

No Hand-Waving

One of the things Sean liked about this project was that we never reached for the “unicorn.” In science fiction, you can often wave your hands and say, “Oh, they invented a blah-blah defibrillator,” and move on. Readers will accept it if the narrative voice is consistent. But I knew my readers would be scientists, engineers, and policy thinkers—the kinds of people who fact-check whether leaves are purple because they reflect purple light, or because they absorb it. (Yes, someone caught that, and yes, we fixed it - thank you Stephen!).

That meant our science had to be tight. Not airtight—this is fiction after all—but plausible enough to force readers into a corner where they had to wrestle with the ethical questions instead of dismissing the premise as unrealistic.

And that’s where the fun really began.

Fiction as Policy Laboratory

Here’s the secret: science fiction isn’t about predicting the future. It’s about expanding the space of possible futures we’re willing to consider.

In policy circles, I often see colleagues dodge the hard questions by narrowing the scope. If you can define your corner tightly enough, you never have to confront the uncomfortable “what if.” But fiction doesn’t let you hide. Fiction says: Okay, let’s imagine the thing happened. Now what?

That’s how Synthetic Eden came to be—not as a prediction, but as a provocation. Could we edit human embryos responsibly? What happens when our tools of innovation outpace our ethical frameworks? Where do we draw the line between therapy and enhancement, between choice and coercion, between what we can do and what we should do?

These aren’t abstract questions. They’re here, now. Fiction just makes it impossible to look away.

Hi! I just finished my advance copy of Synthetic Eden. I’ll write an actual review but I wanted to tell you personally how grateful I am that you wrote and shared it. I’ve been working in S&T/bio policy for a good while and so in my mind I’d already been contemplating the big questions… but reading this still led me to surprise myself with opinions and beliefs I didn’t fully realize I had, or rather hadn’t fully let myself think about. Really important, and also totally engrossing! Last week I was trying to read a couple chapters at a time on my commute and got so impatient about having to put it down that I just cleared my Saturday for uninterrupted reading time!
-Synthetic Eden fan

Collaboration as Innovation

Sean and I worked with Bonnie, his “forever editor,” who he swears set the bar so high he’ll never work without her again. Bonnie is a master of connective tissue—the person who threads the emotional beats into the technical scaffolding. Together, the three of us built not just a book, but a blueprint for how interdisciplinary collaboration can create something far richer than any of us could have done alone—and that turned into a four-part series, now known as Echoes of Tomorrow.

I think about this a lot in the context of innovation. We worship the lone genius, but it’s a myth. The real breakthroughs happen when humility meets ambition, when science meets story, when curiosity meets craft.

If nothing else, this book is proof of that.

The Echoes of Tomorrow

We didn’t set out to write a four-book series. We thought it was one book. Then a trilogy. Then we realized the third book was too big and had to split it in two. What started as a spark has grown into Echoes of Tomorrow, a series that spans centuries, grappling with how one decision today can reverberate across generations.

It feels appropriate. After all, that’s what we’re doing right now in the real world—making choices about gene editing, artificial intelligence, robotics, and climate technologies that will echo far beyond our lifetimes.

The question is not just what we can build, but what kind of world we’re building for those who come after us.

Reflections

Talking with Sean reminded me of something simple but profound: people don’t remember the data points; they remember the stories. If I had written another policy paper on genetic engineering, it would have landed in a PDF folder somewhere. But through story, I’ve already heard from readers who said the book made them realize they held opinions they didn’t even know they had.

That’s the point. Not to give answers, but to create space for people to think, to feel, and to wrestle with the questions that will shape our future.

And maybe, just maybe, to remind us that the future isn’t written yet. We’re writing it now—together.

—Titus
When I first read the new Science paper on Columbian mammoths, I laughed out loud. Not because the work was funny—it’s one of the most rigorous paleogenomics studies to come out in years—but because it felt like a perfect reminder of how slippery our definitions of life can be. We want clean categories: species, lineages, evolutionary trees branching neatly like oak boughs in the wind. Instead, we keep finding tangled knots of hybridization, divergence, convergence, and collapse. Life is messy. It refuses to stay in the lines we draw for it.

The study, led by a Mexican-European collaboration, sequenced 61 Columbian mammoth mitochondrial genomes from fossils unearthed near Mexico City. Most of those bones came from the construction site of a new airport, where tens of thousands of megafaunal remains turned up between 2019 and 2022. If you ever needed a metaphor for the future colliding with the deep past, there it is: an international hub of modern mobility rising atop the remains of Ice Age giants.

What the team found shocked the field. The Mexican mammoths formed their own distinct genetic clade, separate from Columbian mammoths further north and even from their woolly cousins. The Ars Technica headline put it bluntly: “Genetically, Central American mammoths were weird”. Weird is right. Their mitochondrial lineages were so divergent that, on paper, you might be tempted to call them a different species. But they weren’t separate in the way we usually imagine species. They overlapped in time with northern mammoths, interbred, and still maintained their own genetic signatures. The boundaries blurred. Identity was more fluid than fixed.

This brings me back to the Seven Moonshots for the Century of Biology. One of those moonshots—the Paradigms of Life—asks us to confront a deceptively simple question: what counts as life, and how do we define its categories? For a long time, biology pretended the answers were settled. A species is a species. An organism is an organism. A cell is a cell. But every advance—from viruses to CRISPR babies to synthetic genomes—erodes that confidence. Mammoths, it turns out, are just the latest teachers.

What’s striking about this particular mammoth study is not just the discovery of a new lineage, but what that lineage represents. Here were animals living thousands of miles from their Arctic cousins, adapting to warmer climates, carrying with them the ghost signatures of ancient hybridization events. And yet they weren’t collapsing under that genetic messiness—they were thriving. In fact, the study suggests that Mexican mammoths may have represented entire social groups, with males and females preserved together, rather than isolated wanderers. They weren’t evolutionary accidents. They were communities, living within—and shaped by—the complexity of their genetic inheritance.

That insight matters because it pushes us to reconsider the metaphor of the evolutionary tree. For over a century, we’ve sketched life as a branching diagram, each split representing a tidy moment of divergence. But the more genomes we sequence—ancient or modern—the more those clean forks blur into networks. Admixture and introgression are not exceptions; they are the rule. Mammoths remind us that life’s history looks less like an oak tree and more like braided rivers, crossing and rejoining, carving new paths across time.

And isn’t that the challenge we face today in synthetic biology, AI-driven biotechnology, and embryo editing? We keep trying to draw lines, and life keeps finding ways to smudge them. Manhattan Genomics didn’t launch to debate whether embryos could be edited; they launched to do it. OpenAI and Retro Biosciences didn’t train a model to simulate cell identity—they trained it to reprogram it. We are no longer standing outside the tree of life as observers. We are splicing branches, rerouting rivers, and stitching together genomes with the same kind of hybridity that shaped mammoths.

The Mexican mammoths lived at the edge of their species’ range, in climates far warmer than the tundra steppe. That edge is often where you find novelty. Populations isolated by geography or ecology accumulate differences, sometimes so profound that they force us to rethink evolutionary narratives. Today, our frontier is not geographic—it’s technological. And the same principle holds. As we push biology into new domains—fusion of human and machine, synthetic cells, reprogrammed identities—we will discover lineages of thought and practice that look, to our descendants, as strange as Clade 1G mammoths do to us.

Here’s the provocation: maybe we’ve been asking the wrong question. Instead of demanding that life fit into our definitions, perhaps the moonshot is to design definitions that fit into life. A science that accommodates hybridity, admixture, and emergence without forcing them into boxes. A policy framework that acknowledges the messiness without using it as an excuse for paralysis. A culture that can live with blurred boundaries instead of running back to false certainties.

To make that shift, we’ll need tools—not just genomic sequencers and AI models, but intellectual tools that allow us to live with ambiguity. We’ll need to teach the next generation of biologists, engineers, and policymakers that fluidity is not failure, and that embracing complexity is the only way to navigate the future we are building. We’ll need to invest in frameworks like Violet Teaming—responsible innovation exercises that stress-test technologies against unintended consequences—so that our expanded paradigms of life don’t collapse into unexamined risks.

The mammoths are gone, but they leave us this lesson: categories are conveniences, not truths. In the century of biology, our challenge is to embrace that fluidity without losing our bearings. If we succeed, the Paradigms of Life moonshot won’t just redefine biology—it will reshape how we understand ourselves. The story of mammoths isn’t just about extinction. It’s about survival through complexity. And maybe that’s the lesson we most need to carry forward as we stand at the edge of our own evolutionary frontier.

Onward to complexity,
—Titus
The news came out in late August, wrapped in the technical language of a research announcement: OpenAI and Retro Biosciences had used a new AI model—GPT-4b micro—to design better transcription factors for cell reprogramming. For most people, it read like another advance in a long list of AI-assisted discoveries. But inside the world of biology, this was something else entirely. It was a sign that the future of scientific discovery may not be driven by monolithic generalist AIs but by specialized cousins tuned for precision, depth, and impact.

The result—a fifty-fold increase in the expression of stem cell reprogramming markers—wasn’t just a marginal gain. It was a leap across the kind of bottleneck that has frustrated regenerative biology for decades. And the way it happened may tell us as much about the future of AI as it does about the future of medicine.

The podcast audio was AI-generated using Google’s NotebookLM.

SPECIAL: This week, the TCIP podcast crossed the 30,000 downloads mark. A huge thank you for the time you take to read, listen, and discuss these topics.

From giants to instruments

Over the last few years, the story of AI has been a story of scale. The bigger the model, the better the performance. GPT-3 stunned the world in 2020 with 175 billion parameters. GPT-4o pushed even further, with multimodal reasoning across text, vision, and audio. In AI circles, scale became the shorthand for progress.

But science isn’t about scale. Science is about specificity. It’s about the details of a protein fold, the timing of a gene’s expression, the quirks of a single cell line. Throwing a trillion-parameter model at biology might produce some interesting hypotheses, but it won’t replace the hard work of generating proteins that actually fold, bind, and function in the wet lab.

This is where GPT-4b micro comes in. Instead of asking a generalist to learn a new field on the fly, OpenAI and Retro Bio created a scaled-down, domain-tuned sibling. It wasn’t trained on the entire internet; it was trained on protein sequences, 3D structural motifs, and biological text. It wasn’t asked to be a universal chatbot; it was built as a protein engineer.

The results were immediate and profound. Where traditional directed-evolution methods explore only tiny slices of the protein design space, GPT-4b micro proposed hundreds of radically different sequences—variants that differed by more than a hundred amino acids from the original proteins, yet still folded and functioned in the right way. In some screens, over 30% of its suggestions outperformed wild-type proteins, compared to hit rates of less than 10% in traditional approaches.

This is not a matter of “bigger is better.” It’s a matter of tuned is transformative.

The Yamanaka factor problem

To understand why this matters, you need to know the backstory. In 2006, Shinya Yamanaka discovered that just four proteins—OCT4, SOX2, KLF4, and MYC—could reprogram adult fibroblasts into pluripotent stem cells. The simplicity of the recipe stunned biologists. Four ingredients to roll back the clock of cell identity.

The discovery won Yamanaka a Nobel Prize in 2012. But for all its elegance, the cocktail had a crippling flaw: efficiency. In many experiments, fewer than 0.1% of cells actually converted. The process could take weeks. And with older or diseased donor cells, success rates dropped even further.

For two decades, labs around the world tried to improve on this. Directed-evolution campaigns, mutagenesis screens, and careful domain swapping. The progress was incremental at best. After fifteen years, the best engineered variants differed from natural SOX2 by just a handful of residues.

Then GPT-4b micro came along. Instead of tweaking a few amino acids, it rewrote the proteins wholesale. RetroSOX and RetroKLF weren’t subtle modifications; they were bold reimaginings. And they worked. In fibroblast screens, late pluripotency markers appeared days earlier than in cells reprogrammed with the original Yamanaka cocktail. In mesenchymal stromal cells from donors over fifty, more than 30% expressed stem cell markers within a week. The colonies that emerged were healthy, stable, and genomically intact.

This is what AI can do when it stops being a generalist and becomes a specialist. It doesn’t just accelerate discovery—it reshapes the problem space.

Why specialization matters

There’s a lesson here that extends far beyond biology. Generalist LLMs are astonishing at reasoning across domains, but their strength is breadth, not depth. They can draft essays, summarize papers, or suggest ideas. But when the problem is a protein with 317 amino acids and an astronomical number of possible variants, breadth is useless.

What scientists need is depth with context. That’s what GPT-4b micro offered: protein-centric embeddings enriched with co-evolutionary data, structural motifs, and functional annotations. The model could be prompted not just with a question but with a design goal: make this factor more efficient at reprogramming cells.

The broader implication is that science may evolve into a landscape of specialized AI instruments. A model for protein design. Another for metabolic pathway modeling. Another for materials science. Instead of a single AI oracle, we’ll have a suite of tuned instruments, each honed for a specific kind of discovery.

This doesn’t diminish the role of generalist models. It reframes them. Think of them as the operating system—the environment where scientific reasoning happens. The specialized siblings are the applications, the finely crafted tools that do the heavy lifting.

Universal cell plasticity: the next frontier

If AI can design better Yamanaka factors, what else can it do? The logical endpoint is tantalizing: generalized transcription factor cocktails capable of converting any cell type into any other.

This has been a dream of regenerative biology for decades. Every cell in your body carries the same genome; the difference between a neuron and a hepatocyte isn’t the DNA itself but the regulatory program that controls which genes are turned on or off. Reprogramming is essentially rewriting that regulatory code.

Until now, it’s been more art than science. Recipes are discovered piecemeal—this set of factors turns fibroblasts into neurons, that set turns them into cardiomyocytes. But there’s no universal codebook.

What if AI could generate one? What if, instead of trial and error, we had a Rosetta Stone of cell identity, a computational map of transcription factor space that could be used to design cocktails for any conversion?

The implications are staggering:

* Medicine: damaged heart tissue after a heart attack could be rebuilt with cells reprogrammed directly in place.
* Cancer: malignant cells might be pushed back into a normal state instead of being destroyed.
* Aging: senescent cells could be rejuvenated at scale, restoring function across tissues.
* Transplants: organ shortages could be addressed by building tissues from a patient’s own reprogrammed cells, eliminating rejection risk.

In short, a universal reprogramming toolkit could make biology as programmable as software.

The acceleration loop

One of the striking aspects of the OpenAI–Retro collaboration is speed. What took biologists fifteen years to achieve with careful experimentation—variants differing by a few residues—took GPT-4b micro less than a year to blow past, generating sequences with hundreds of changes that outperformed the originals.

This speed comes from a feedback loop between AI and the lab. The model proposes bold candidates, the lab tests them quickly, and the results feed back into the model. Each cycle compresses discovery timelines further. What once took years now takes weeks.

It’s not hard to see where this leads. With autonomous labs, cloud biology platforms, and continuous model retraining, the cycle could become nearly self-driving. Scientists might set the goals—make this factor more efficient, reprogram this cell type into that one—and the AI-lab loop does the rest.

That doesn’t mean scientists are out of the picture. It means their role shifts from trial-and-error experimentation to strategy, interpretation, and application. The discovery engine itself hums in the background, powered by specialized models and automated labs.

Guardrails and governance

Of course, the power to reprogram cells isn’t just scientific. It’s social, medical, and regulatory. If generalized transcription factor cocktails become real, the line between therapy and enhancement blurs. The risks of off-target effects, tumorigenesis, and misuse are non-trivial.

The OpenAI–Retro announcement was careful to note genomic stability, successful differentiation into germ layers, and replication across donors. But history teaches us that translation from lab to clinic is fraught. The road from iPSC discovery to approved therapies has been long precisely because of safety concerns.

As specialized models accelerate discovery, governance will have to accelerate alongside them. Not in a way that stifles innovation, but in a way that ensures therapies are safe, accessible, and equitably distributed. Otherwise, the promise of programmable biology risks being captured by a handful of institutions, leaving society unprepared for its consequences.

The horizon

It’s tempting to dismiss the idea of universal cell reprogramming as futuristic speculation. But remember: fifteen years ago, the idea that four proteins could reset cell identity sounded like science fiction. Today, we’re talking about AI-engineered variants that reprogram more efficiently, faster, and with better genomic stability than the originals.

The pace is quickening. Specialized AIs like GPT-4b micro may be the first wave of a broader trend: the emergence of domain-tuned intelligence as the engine of scientific discovery.

If that’s true, the age of trial-and-error biology is closing. The age of designed biology is opening.

Cheers,
-Titus
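The acceleration loop described above has a simple computational skeleton: propose many bold variants, screen them, keep the hits, and feed the results back. Here is a toy sketch of one round in Python; the sequences, scoring function, and hit threshold are invented placeholders, not the GPT-4b micro method:

```python
import random

random.seed(7)
AA = "ACDEFGHIKLMNPQRSTVWY"  # the twenty standard amino acids

def propose_variants(seed_seq: str, n: int, n_mutations: int) -> list[str]:
    """Model stand-in: propose variants with many simultaneous substitutions,
    rather than the single-residue steps of classic directed evolution."""
    variants = []
    for _ in range(n):
        seq = list(seed_seq)
        for pos in random.sample(range(len(seq)), n_mutations):
            seq[pos] = random.choice(AA)
        variants.append("".join(seq))
    return variants

def screen(variant: str) -> float:
    """Wet-lab stand-in: noisy activity score (toy distribution, not real biology)."""
    return random.gauss(1.0, 0.4)

wild_type = "".join(random.choices(AA, k=317))  # SOX2-like length, per the essay
candidates = propose_variants(wild_type, n=200, n_mutations=100)
scores = [screen(v) for v in candidates]
hit_rate = sum(s > 1.0 for s in scores) / len(scores)  # hits beat wild-type activity
print(f"hits above wild type: {hit_rate:.0%}")
# In a real loop, the hits would be fed back to retrain the model for the next round.
```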
I sometimes think about how quickly our relationship with life has shifted. In the span of a single generation, biology has gone from something we observed in textbooks and field journals to something we now design on computer screens. It’s not just that we can read DNA faster or edit it with more precision—it’s that we are beginning to treat biology as an engineering discipline, an economic foundation, and even a civilizational strategy.

That realization can feel overwhelming. But when you step back, the story of biology in the 21st century crystallizes around seven themes. I call them moonshots not because they are impossibly far away, but because, like Kennedy’s vision in 1962, they are the challenges that demand ambition, coordination, and a little bit of audacity. Together, they define the frontier of what it means to live in the century of biology.

The podcast audio was AI-generated using Google’s NotebookLM.

What Is Life? The First Frontier

It’s remarkable that even as we engineer cells from scratch, we still can’t agree on a definition of life. Is a virus alive? What about a self-replicating chemical system? The more we poke at the boundaries, the more slippery they become.

This isn’t just philosophy—it’s a practical problem. Without a theory of life, we’re flying blind as we try to manipulate it. Physics had Newton’s laws. Chemistry had the periodic table. Biology has data, but not yet a unifying framework.

To me, this moonshot is about more than semantics. It’s about building the conceptual scaffolding that lets us design responsibly. If we understood the principles that distinguish life from non-life, we could predictably create new organisms, recognize alien biologies, and even better grasp our own place in the continuum of the living world. We are still waiting for biology’s Newtonian moment.

Moonshot 1: Defining the Fundamental Paradigms of Life

Cracking the Code: From Genotype to Phenotype

We’ve sequenced millions of genomes. We can read the letters of life with astonishing speed. And yet, we still can’t look at a genome and know what creature—or what traits—it encodes.

This gap is staggering. Imagine designing a bridge without knowing how steel bends or concrete sets. That’s what bioengineering often feels like: powerful tools in hand, but incomplete maps of causality.

The dream is a “Periodic Table of Biology”: a predictive framework that links DNA sequence to physical traits with confidence. Achieving this would transform medicine, agriculture, and conservation. We’d know exactly which genetic edits cure disease, which combinations yield climate-resilient crops, and how ecosystems adapt to stress.

Right now, we make progress in pieces—protein folding solved with AI, statistical models of disease risk—but the big picture remains elusive. Unlocking this relationship is one of the most important scientific quests of our time. It’s not just about genes. It’s about predictability in a field that has long been dominated by trial and error.

Moonshot 2: Unlocking the Genotype-to-Phenotype Relationship

Making Biology Engineerable

Walk into a cutting-edge biotech lab today, and you’ll see something that looks more like a server farm than a wet bench: robots pipetting with machine precision, cloud labs running thousands of assays remotely, AI models designing genetic circuits.

This is the essence of the third moonshot—turning biology into a reliably engineerable substrate. For too long, building with biology has been like stacking rough stones into a wall: artisanal, fragile, unpredictable. We want aqueducts, not rock piles.

The design-build-test-learn cycle is tightening, automation is accelerating, and standardization is slowly emerging. The vision is that programming a cell should one day feel as reliable as programming a computer. Of course, biology will always be more unruly than silicon (maybe), but the closer we get to predictability, the more bold ideas we can responsibly pursue.

This is not about stripping biology of its mystery. It’s about giving innovators the tools to safely unleash that mystery at scale.

Moonshot 3: Making Biology Predictably Engineerable

Scaling with Design-for-Manufacturing

Discovery is exhilarating. But translation is what changes lives. History is littered with brilliant biotech ideas that never scaled because they were too fragile, too expensive, or too slow to produce.

The fourth moonshot is about embedding manufacturability into biology from day one. It’s not enough to invent a microbe that secretes a new drug in a flask. We need to know if it can thrive in 10,000-liter tanks, withstand cheap feedstocks, and stay stable over months of production.

Design-for-manufacturing sounds mundane compared to gene editing or synthetic cells, but it’s the quiet revolution that determines whether cures reach millions or sit on a shelf. COVID-19 vaccines made this point brutally clear: invention is only half the battle. The other half is scale.

If we achieve this moonshot, the lag between lab and world shrinks dramatically. A new idea could reach billions not in decades, but in years—or less.

Moonshot 4: Scaling Biotechnology with Design-for-Manufacturing

Adaptive Biological Infrastructure

The 20th century built centralized systems: power grids, megafactories, globalized supply chains. They were efficient, but brittle. The 21st century is reminding us that resilience comes from decentralization.

The fifth moonshot envisions an “internet of biomanufacturing.” Think of containerized vaccine factories deployable anywhere, local bioreactors producing fertilizers or medicines, portable diagnostic labs, and cloud-connected biology that can adapt on demand.

This isn’t science fiction. It’s happening now. The lesson from COVID-19 was stark: when production was concentrated in a few wealthy countries, the rest of the world waited. Distributed infrastructure flips the model. Biology made in Africa stays in Africa. Biotech tailored to local needs arises locally.

Adaptive infrastructure is about democratizing access to the tools of life. It’s about ensuring that biology isn’t confined to elite campuses, but woven into communities. Resilience in this century will mean many nodes, many voices, many sources of strength.

Moonshot 5: Designing Adaptive Biological Infrastructure for an Uncertain World

Aligning the Bioeconomy with People and Planet

The story of insulin says it all. The scientists who discovered it gave away the patent so no one would ever be denied. Today, insulin prices force patients to ration doses. That’s not a failure of science. It’s a failure of incentives.

The sixth moonshot is about realignment. It’s about designing markets, IP, and policies so that doing the right thing—curing neglected diseases, building sustainable materials, making lifesaving therapies affordable—is also the profitable thing.

Without alignment, biology risks replicating the inequities of past industries: wonder drugs priced out of reach, climate solutions ignored in favor of petrochemical profits, biotechnologies that deepen divides rather than heal them. With alignment, biotechnology could become capitalism with a conscience: curing, sustaining, and uplifting in ways that reward both innovators and humanity.

This is a governance problem as much as a technical one. But if we succeed, the century of biology could be the century when profit and purpose finally pull in the same direction.

Moonshot 6: Aligning Bioeconomy Incentives with Human and Planetary Health

Embedding Ethics, Security, and Narrative

The final moonshot is the most human. It asks us to recognize that biology isn’t just science—it’s story. Every edit, every breakthrough, every deployment writes a chapter in what it means to be human.

The CRISPR babies scandal in 2018 wasn’t just about scientific overreach. It was about a narrative leap made without consent, a story told without society’s input. It revealed what happens when innovation runs ahead of ethics.

Embedding ethics means more than review boards. It means Violet Teams probing for misuse and building safeguards into design. It means ethicists and communities shaping projects from the beginning. It means scientists becoming storytellers, not just technicians.

Because if we don’t tell the story of biology ourselves—openly, humbly, inclusively—others will. And history shows those stories, once seeded, are hard to rewrite.

This moonshot is about giving biotechnology a soul. Without it, the rest risk collapse. With it, we can ensure that the century of biology is not only powerful, but wise.

Moonshot 7: Embedding Ethics, Security, and Narrative in Biological Futures

The Century of Biology as a Unified Story

Each moonshot could stand on its own. But the truth is, they are inseparable. Foundational theory feeds predictability. Predictability enables engineering. Engineering enables scaling. Scaling demands adaptive infrastructure. Infrastructure requires aligned incentives. And all of it must be wrapped in ethics and narrative if it is to endure.

The seven moonshots form not just a research agenda, but a civilizational thesis. They ask us to treat biology as the substrate of our shared future.

If we succeed, the world of 2050 could look very different:

* Doctors using predictive genome “flight simulators” to design cures tailored to you.
* Communities operating local bioreactors for food and medicine.
* Biotech industries that enrich both investors and the planet.
* A public narrative that embraces biology as stewardship, not hubris.

These are not idle dreams. They are the possible outcomes of decisions we make today—how we prioritize research, how we design institutions, how we tell our story.

At the frontier of technology, humanity is the experiment. The seven moonshots are our experimental design. They are the guardrails, the scaffolding, the guiding stars.

The real question is not whether biology will transform this century. It already is. The question is whether we can meet that transformation with ambition, humility, and purpose.
It wasn’t that long ago—2018—that the biggest bioethics story in the world was CRISPR Baby Scientist Goes to Prison. The Chinese researcher He Jiankui announced the birth of twin girls whose genomes he had edited in an attempt to confer HIV resistance. The backlash was immediate and global: scientists condemned it, governments tightened oversight, and He was tried and sentenced to three years in prison. It was a morality play in three acts—hubris, outrage, punishment—and for a while, it felt like the ending was written.

Human embryo editing wasn’t just discouraged; it was radioactive. The mere thought of it conjured visions of “designer babies” and sci-fi dystopias. The conversation wasn’t about if we could do it safely or ethically—it was about whether we should be talking about it at all.

Fast forward seven years.

The podcast audio was AI-generated using Google’s NotebookLM.

Last week, a U.S. startup called Manhattan Genomics launched with the explicit mission to edit human embryos—not for hair color, height, or IQ, but to prevent inherited genetic disease before a child is even born. They are not hiding in the shadows. Their homepage opens with the line: “We’re building a future where no child inherits preventable disease.” Their ethics statement is unapologetic: “Ethics should be driven by reducing human suffering.” And they make their case plainly—if you can correct a deadly mutation at the zygote stage, you can prevent a lifetime of illness, avoid massive healthcare costs, and break the cycle of inherited suffering before it begins.

And here’s the kicker: the co-founder and head of science is Eriona Hysolli, the first head of mammoth biology at Colossal Biosciences—the same company I helped build, and whose core tools are designed for engineering large mammals. Which is exactly what we are.

When I worked at Colossal, we were advancing techniques to reprogram cells, edit genomes, and reconstitute extinct traits. It was thrilling, frontier science. But even then, I knew the inevitable truth: the moment these tools became reliable enough to engineer an elephant, they would be reliable enough to engineer a human embryo. The technical barrier between “mammoth” and “human” is vanishingly small. The barrier is—and has always been—ethical, cultural, and political.

Which is why I wrote Synthetic Eden.

We’re officially opening sign-ups for Advanced Reader Copies of Synthetic Eden. All I ask in return is that you leave an honest review on Amazon and/or Goodreads on launch day: September 9, 2025.

I wanted to create a space where readers could grapple with this moment before it arrived. The story isn’t a thought experiment set in a distant future—it’s a Kobayashi Maru scenario for biology: the no-win ethical challenge where every choice is fraught. Do we withhold a technology that could save humanity from extinction? Or do we open the door to altering the human germline, knowing full well that once the door is open, it never closes?

Manhattan Genomics is not a hypothetical. They are here, operating in the U.S. and arguing for the revision of federal prohibitions like the Dickey-Wicker and Aderholt amendments. They aren’t promising utopia—they’re promising to do the work in the open, to bring in bioethicists, to seek FDA approval, and to limit their scope to disease prevention. And yet…

The technical promise and the ethical peril are now braided together. The public imagination has to catch up, fast, because these decisions will not be made in abstract white papers. They will be made in labs and clinics, in venture boardrooms, and eventually in family planning conversations around kitchen tables.

The future I wrote into Synthetic Eden is no longer speculative fiction—it’s the news cycle. And if we’re not ready to engage with it honestly, we’re not ready for what comes next.

Because here’s the truth: whether we like it or not, humanity has stepped into the role of our own evolutionary engineer. And once you accept that premise, the only real question left is—how far are we willing to go?

This is why I have always believed that:

At the frontier of technology, humanity is the experiment.

The question now is, who is designing the experiment?

Cheers,
-Titus

P.S. I think the reference to the Manhattan Project is unfortunate. The Space Race, the Human Genome Project, or so many other moonshots could capture the imagination. The reference to the Manhattan Project creates a very poor connotation for the future of this technology.
There’s something quietly radical about the idea that a junior scientist—someone who’s never designed a CRISPR experiment before—can now walk into a wet lab and, on their very first attempt, edit the genome of a human cancer cell with precision and purpose. Not because they’ve suddenly been blessed with innate genius or overnight training, but because an AI agent walked them through it, step by step, in language they understood, in logic they could follow.This isn’t science fiction anymore.In a new study published in Nature Biomedical Engineering, a team led by researchers from Stanford, Princeton, Berkeley, and Google DeepMind unveiled a system they call CRISPR-GPT—a large language model (LLM) agent designed to be an autonomous co-pilot for gene editing. It doesn’t just recommend CRISPR systems or answer FAQs. It builds workflows. It plans experiments. It chooses delivery vectors. It designs guide RNAs. It performs data analysis. It defends against dual-use risks. And, in one of the most telling demonstrations, it helped two inexperienced researchers complete end-to-end CRISPR experiments—successfully, on their first try.What we’re witnessing isn’t just automation. It’s a shift in who can do science—and how.The podcast audio was AI-generated using Google’s NotebookLM.So was this video! 🤯From Bench Bottlenecks to Language InterfacesLet’s be honest: CRISPR is one of the most powerful tools biology has ever invented. But the knowledge required to wield it responsibly and effectively is immense. You have to know how to pick the right Cas enzyme. You need to design guide RNAs with precision. You need to avoid off-target effects. You need to deliver your payload into the right cells. And you have to make sense of noisy, messy data that sometimes doesn’t align with theory. That’s not even touching on biosafety and ethical considerations.What CRISPR-GPT does is compress that complexity into something closer to a conversation.The system operates in three modes:* Meta Mode, for structured step-by-step instruction.* Auto Mode, for freestyle requests and automated planning.* Q&A Mode, for targeted scientific questions.It’s not just “ChatGPT for biology.” CRISPR-GPT is built from a compositional, multi-agent architecture with discrete task executors, tool providers, and a Planner that chains together experimental logic like a digital lab manager. It uses retrieval-augmented generation to pull from curated protocols and literature. It integrates with external tools like Primer3, CRISPResso2, and CRISPRitz for tasks like primer design and off-target analysis. It even fine-tunes itself using 11 years of open-forum scientist discussions, harvested from a CRISPR Google Group.What’s remarkable isn’t that it works—it’s that it worked in the wild. In actual wet labs. By beginners.Real Experiments, Real Cells, Real SuccessIn one test, a junior PhD student used CRISPR-GPT to knock out four genes in human lung cancer cells: TGFβR1, SNAI1, BAX, and BCL2L1. These genes were chosen because of their known roles in tumor progression and apoptosis. CRISPR-GPT selected the multitarget-capable enAsCas12a enzyme, proposed lentiviral transduction, designed guide RNAs targeting key exons, and generated full protocols for cloning, delivery, and validation. The researcher followed the protocol, sequenced the outcomes, and achieved over 80% editing efficiency across all four targets.And the phenotype matched the expectation. 
What’s remarkable isn’t that it works—it’s that it worked in the wild. In actual wet labs. By beginners.

Real Experiments, Real Cells, Real Success

In one test, a junior PhD student used CRISPR-GPT to knock out four genes in human lung cancer cells: TGFβR1, SNAI1, BAX, and BCL2L1. These genes were chosen because of their known roles in tumor progression and apoptosis. CRISPR-GPT selected the multitarget-capable enAsCas12a enzyme, proposed lentiviral transduction, designed guide RNAs targeting key exons, and generated full protocols for cloning, delivery, and validation. The researcher followed the protocol, sequenced the outcomes, and achieved over 80% editing efficiency across all four targets.

And the phenotype matched the expectation. When those edited cells were exposed to TGFβ—a classic trigger for epithelial–mesenchymal transition (EMT)—they resisted the signal. Expression of CDH1 and VIM, hallmark EMT markers, was significantly suppressed compared to wild-type controls. Not only was the edit technically successful, it functionally disrupted a cancer-relevant pathway.

In a second experiment, a different beginner used CRISPR-GPT to activate two genes (NCR3LG1 and CEACAM1) via CRISPR-dCas9 in melanoma cells. Again, full design and analysis were led by the AI co-pilot. Result: >90% activation efficiency for CEACAM1, and over 50% for NCR3LG1. First attempt. No expert intervention.

This is the kind of work that, even a few years ago, would’ve required weeks of design, review, troubleshooting, and expert supervision.

Now? It’s a chat. A collaboration. A partnership with an AI scientist.

The Lab Manager Becomes the System Architect

To understand why this matters, we have to step back and see the deeper shift underway.

We often think of LLMs as language tools—summarizers, translators, code assistants. But in CRISPR-GPT, the language model is not the endpoint. It’s the orchestrator. The model decomposes high-level research goals into executable subtasks. It maintains state across tasks. It evaluates user responses. It integrates context from prior steps. It uses ReAct-style reasoning chains to choose which tool to invoke and when. It’s not just answering questions; it’s doing science.

That shift—from response to responsibility—is what makes CRISPR-GPT an agent, not just an interface.

When a user types, “I want to knock out the BRD4 gene in A549 cells,” CRISPR-GPT doesn’t say “Here’s how.” It plans. It figures out which Cas enzyme fits the use case. It checks for delivery compatibility. It parses sgRNA tables to find exon-targeting sequences that matter biologically. It runs off-target analysis. It hands you a protocol. Then it helps you analyze your data.

In many ways, it becomes your PI, your lab manager, your protocol book, and your graduate student—all in one.

The Next Phase of “Democratizing Science”

The term democratizing science gets thrown around a lot in tech circles. But too often it means “make a shiny app,” not “make the hard stuff comprehensible.” What CRISPR-GPT demonstrates is that true democratization means lowering the barrier not just to access, but to execution—and doing so responsibly.

That means a junior scientist in a mid-tier lab, or a solo biohacker in a community space, or a clinician-researcher at a hospital, can now explore gene-editing questions with rigor. That doesn't eliminate the need for training, mentorship, or critical thinking—but it changes the on-ramp. It makes the front door wider.

And that should make us pay attention. Because with new access comes new responsibility.

The paper’s authors are very aware of this. CRISPR-GPT includes built-in safeguards. If a user tries to edit human germline cells (for now) or asks to design a bioweapon—like a mutation-enhanced virus—the system intervenes. It issues warnings. It refuses to proceed. It links to international ethical guidelines. It enforces organism disclosure before continuing a request.

But we shouldn’t fool ourselves into thinking technical safeguards solve all the problems. This is a new kind of capability. And like any powerful capability, it needs governance, oversight, and continuous societal dialogue.
What This Means for the Future of Bio + AI

CRISPR-GPT is a prototype. It has limitations. It leans heavily on human-curated data. It performs best on human and mouse genomes. It still depends on expert-created workflows, and occasionally stumbles on complex edge cases.

But its trajectory is clear. With each iteration, it becomes easier to imagine a future where the design and analysis of biological experiments can be as simple—and as powerful—as writing code.

More provocatively: CRISPR-GPT collapses the boundary between thinking and doing. A biological idea doesn’t have to route through a dozen people, weeks of design cycles, and opaque lab protocols. It can be directly rendered into reality through an AI-powered system that reasons, critiques, executes, and evaluates in a loop.

That doesn’t diminish the role of human scientists. It amplifies it. It liberates us from routine errors and redundant tasks. It invites us to spend more time on hypothesis generation, ethical framing, and creative exploration. But it also raises hard questions about expertise, access, and control.

If anyone with a browser and a pipette can do CRISPR, what happens to the institutional gatekeepers? If AI becomes the experimental designer, what happens to the apprenticeship model of science? If LLMs can generate full experimental pipelines, how do we train the next generation to know what’s under the hood?

We don’t have answers yet. But we do have a new starting point.

I’m actually about to launch my debut sci-fi novel, and this is so timely. Sci-fi is a window into reality, if done right, and the future is now, my friends. If you want to read the story before you can buy the book, subscribe to the Saturday Morning Serial. One chapter, every Saturday, just for you. A thank you for supporting TCIP.

Biology with a Prompt

One of the defining features of this decade will be the fusion of model-based cognition with biological experimentation. CRISPR-GPT is one of the first real systems to operate at that intersection—not just as a tool, but as a collaborator.

And that changes the texture of science itself.

In this new world, experiments begin not with a lab notebook sketch or a whispered question to a postdoc, but with a prompt. “I want to see what happens if I knock out this gene.” “Can we test this in organoids instead?” “What if we activate this immune marker and observe resistance profiles?”

The prompt becomes the proposal. The model becomes the method. And the researcher becomes both conductor and critic in a symphony of automated agents, human judgment, and living systems.

We are not just building better tools.

We are building a new language for discovery—one where biology speaks through code, and code speaks back with insight.

And at the frontier of that dialogue, humanity remains the experiment.

Cheers,
-Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
I first read this paper in graduate school. It wasn’t assigned. I found it on my own—probably during one of those late-night deep dives into the internet, half reading for a lab presentation, half procrastinating from a dataset that just wouldn’t behave.

The paper is called “Narrative Style Influences Citation Frequency in Climate Change Science,” and it’s exactly what it sounds like. A team of researchers at the University of Washington tested whether scientific articles written in a more narrative style were more likely to be cited than their dry, expository cousins. And they found something that hit me like a thunderclap: yes, narrative writing—storytelling, essentially—made a measurable difference in the uptake of scientific research.

At the time, I was deep in the world of AI and bioinformatics, where the primary currency was clarity, data, and objectivity. But this paper changed how I thought about communication—not just in science, but in policy, strategy, and now, writing for you in The Connected Ideas Project.

Because here’s the thing: the best writing—technical, scientific, or otherwise—is always telling a story. It’s not fiction. It’s not spin. It’s simply the reality that readers, no matter how trained, want to be brought along. They want to understand not just what something is, but why it matters, where it leads, and what we should do about it.

The podcast audio was AI-generated using Google’s NotebookLM.

A Paper That Measured Storytelling

The authors of the study didn’t just say “stories matter”—they quantified it.

They selected 732 abstracts from peer-reviewed climate science literature and had crowdsourced evaluators assess six narrative elements in each one:

* Setting – was there a place or time?
* Narrative perspective – was there a narrator?
* Sensory language – could you feel or sense anything?
* Conjunctions – did the sentences connect logically, like a story?
* Connectivity – were ideas threaded together?
* Appeal – did the author make a moral claim or call to action?

Each abstract was given a composite “narrativity index.” And the results were stunning: papers with higher narrativity scores had significantly more citations, even after accounting for things like number of authors, journal impact factor, and abstract length. In fact, the most highly cited journals also had the most narrative abstracts.

In short, writing like a human being—not just a data-bot—mattered. Which is hard, because we are basically trained in science NOT to write like this.
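Purely as a toy illustration of the method, here is what a composite narrativity index and its relationship to citations might look like in Python. The element names follow the paper, but the 0-to-1 scoring scale, the data, and the bare correlation are invented simplifications; the study itself used crowd raters and regression models with controls.

```python
# Toy illustration: score six narrative elements per abstract, combine
# them into a composite "narrativity index," and check its association
# with citation counts. All data below are invented for demonstration.
from statistics import correlation, mean
import math

ELEMENTS = ["setting", "perspective", "sensory",
            "conjunctions", "connectivity", "appeal"]

def narrativity_index(scores: dict) -> float:
    # Composite index: here, simply the mean of the six element scores.
    return mean(scores[e] for e in ELEMENTS)

# Three fake abstracts, each element scored 0 (absent) to 1 (strong).
abstracts = [
    dict(zip(ELEMENTS, [0.1, 0.0, 0.2, 0.3, 0.2, 0.0])),
    dict(zip(ELEMENTS, [0.5, 0.4, 0.3, 0.7, 0.6, 0.2])),
    dict(zip(ELEMENTS, [0.8, 0.7, 0.6, 0.9, 0.8, 0.5])),
]
citations = [4, 11, 27]  # invented citation counts

idx = [narrativity_index(a) for a in abstracts]
# The paper used regression with controls (author count, journal impact
# factor, abstract length); a bare Pearson correlation is the toy version.
r = correlation(idx, [math.log(c) for c in citations])
print(f"indices: {[round(i, 2) for i in idx]}, r with log-citations: {r:.2f}")
```

Log-transforming citation counts is a standard way to tame their skew; the only point of the sketch is that the index is a simple, transparent composite, not a black box.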
But It’s Not Just About Citations

Citations are a proxy for impact, sure. But they’re also a proxy for memory. For influence. For salience in a world drowning in PDFs.

And here’s where it gets even more interesting: some of the most statistically significant boosts came from conjunctions (words that tie ideas together) and connectivity (repetition or reference that builds coherence). In other words, the abstract didn’t have to be a Shakespearean monologue. Just giving the reader a breadcrumb trail made a difference.

Even appeal to the reader—something many scientists are taught to avoid in the name of objectivity—correlated with higher influence. And yet, you can still be objective while inviting readers to care. Good objectivity can still be a story.

That balance between narrative and expository writing isn’t a tradeoff. It’s a craft.

TCIP Was Born From This Belief

This paper sat with me for years. I would think about it every time I tried to write a funding proposal, craft a strategic document, or explain technical ideas to a policymaker. Why is no one listening? Why doesn’t this report land? Why does this memo die in someone’s inbox?

And eventually I stopped asking those questions and started changing how I write.

That shift is part of why I created The Connected Ideas Project. Because too many important ideas never get their due—not because the science is flawed or the data is bad—but because no one took the time to tell the story behind it.

Not a made-up story. Not a TED Talk. A real story: of discovery, of risk, of consequence. A story that says, “Come with me. I’ll show you something that matters.”

The irony, of course, is that science used to be story. Darwin wrote Origin of Species like a personal letter to the world. Rachel Carson’s Silent Spring led with dead birds, not pesticide data. The first climate models were scrawled out like sketches of possibility.

And then we got professionalized. And cautious. And bureaucratic.

But this paper reminded me that even the strictest scientific literature—peer-reviewed, citation-counted, jargon-laden—still responds to storytelling.

And if that’s true for climate scientists, it’s true for all of us.

It’s actually a large part of why I started writing weekly sci-fi short stories and eventually my first novel. Because once you start to feel the power of narrative, it’s hard to stop.

I have actually released my debut novel to TCIP subscribers early. If you want to read it before you can buy the book, subscribe to the Saturday Morning Serial. One chapter, every Saturday, just for you. A thank you for supporting TCIP.

Writing for the Mind and the Brainstem

The neuroscientific case is compelling, too. When we read stories, we engage parts of the brain associated with memory, emotion, and social cognition. Our brain lights up differently than when we read expository text.

In a policy world obsessed with “impact,” this is critical. Because it means storytelling isn’t fluff. It’s the only way some ideas ever sink in.

When I sit down to write a Tech Tuesday piece—or structure a research strategy, or brief a government official, or help steer an organization—I think about this paper. I think about the balance between narrative and fact, between arc and evidence, between compelling and correct.

That’s the frontier. Not just of biotechnology or AI or national policy.

But of communication itself.

What This Means for You

If you’re reading this, you probably communicate professionally. Maybe you write research papers. Maybe you’re a policymaker, or a startup founder, or a systems thinker trying to bridge fields. Or maybe you’re just tired of ideas getting lost in the noise.

Here’s what this paper—and my experience—tells us:

* Tell a story, even in your technical writing. Not a fable, but a journey. A “why,” not just a “what.”
* Connect the dots for your reader. Use conjunctions. Repeat key ideas. Build momentum.
* Don’t be afraid of emotion. An abstract with a moral appeal is more influential than one without.
* Even in hard science, narrative matters. Because humans are still the audience.
* And if you want your work to last, your ideas to travel, and your impact to grow—write like it matters.

TCIP was born from the belief that big ideas need more than bullet points. They need clarity, momentum, and heart.
And maybe a bit of sensory language and moral appeal.

Thanks to a paper I found in grad school, I’ve spent the last decade trying to write that way.

And if you’re still reading, I’d say it’s working.

Let’s keep telling better stories,
—Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
In 1999, scientists finally put a name to one of the most devastating pandemics you’ve probably never heard of: Batrachochytrium dendrobatidis, or Bd—the amphibian chytrid fungus. By the time it had a name, it had already triggered one of the largest mass extinction events in recorded biological history.

Bd has been linked to the decline or extinction of over 500 amphibian species, with at least 90 species likely wiped out entirely. And while frogs, toads, and salamanders might not be the poster children of ecosystem collapse, their disappearance sets off cascading failures that ripple through everything else. Amphibians are the keystones that hold the quiet corners of the natural world in place. When they fall, everything shifts—quietly, and then all at once.

The podcast audio was AI-generated using Google’s NotebookLM.

The Disappearing Canaries

Amphibians are more than mosquito-eaters and rainforest ambiance. They are biological barometers. Their semi-permeable skin makes them hyper-sensitive to changes in water quality, temperature, UV radiation, and environmental toxins. In short, they are nature’s early warning system.

And we are ignoring the alarm.

When frogs vanish, it’s not just a loss for biodiversity. It’s a signal that something is deeply wrong in the structure of the world. Insects surge. Crops suffer. Vector-borne diseases reemerge. Ecosystems become brittle. So when we talk about chytrid, we’re not just talking about an obscure fungal pathogen. We’re talking about the quiet unraveling of planetary resilience.

Fighting Fire with Fungus-Eating Viruses

But science, as always, is on the case. A research team at UC Riverside has discovered a virus that infects the Bd fungus itself—a single-stranded DNA virus that appears to reduce the fungus’s ability to reproduce. This discovery opens the door to a potentially groundbreaking intervention: genetically engineering the virus to suppress or destroy Bd. Nature versus nature, on our terms.

It’s a thrilling possibility. If successful, we could undo one of the greatest ecological losses of the past century. But tinkering with a virus to alter a fungus to save a frog isn’t as tidy as it sounds. Some infected strains of Bd may actually become more virulent, not less. As with many biological systems, the edges between healing and harm are thin, and often moving.

Fictional Futures, Real Warnings

This exact ambiguity—where the fix becomes the threat—is the opening premise of my debut novel, On the Wings of a Pig, which comes out this September. The story begins with the genetically engineered “solution” to chytrid backfiring catastrophically. The modified fungus mutates, spreads faster, jumps species. Ecosystems collapse. Civilization follows.

I didn’t start with frogs because they’re trendy. I started there because they are real—and they are dying. And because ecological collapse isn’t a backdrop for drama. It’s a slow-moving event that is already happening. I wanted readers to begin the story in a world that feels eerily familiar because the warning signs are already around us. They just don’t look like catastrophe yet.

This is where science fiction becomes a policy tool.

The chytrid pandemic is often treated as a niche crisis—something for ecologists in field vests with butterfly nets and notebooks. But this isn’t their issue alone. It’s mine. It’s yours.
Because when we lose amphibians, we don’t just lose frogs. We lose control of the systems that make our world liveable.

If you want to read the story before you can buy the book, subscribe to the Saturday Morning Serial. One chapter, every Saturday, just for you. A thank you for supporting TCIP.

The Purpose of the Premise

I wrote On the Wings of a Pig to inspire deeper thinking about the power and peril of genetic engineering. We absolutely can solve real, meaningful global challenges when we get it right. But if our tools outpace our wisdom—if we act from hubris instead of humility—we may get it very, very wrong.

Science fiction lets us run the simulation forward. It lets us imagine unintended consequences without having to live them. The chytrid disaster in the novel is fictional, but it’s not fantastical. It’s rooted in real science and real risks. And in many ways, the novel is not a warning against science, but a rallying cry for responsible science—science that sees the whole system before reaching for a scalpel.

The reason I began the story with ecological collapse was deliberate. I wanted to bring visibility to a crisis that most people don’t see as their own. The frogs feel distant. The fungus is invisible. The threat is slow. But the consequences are real. And by the time they’re obvious, it’s already too late.

Lessons in the Living World

The scientists in the real chytrid research are moving cautiously. They’re asking the right questions. How does the virus infect the fungus? What impact does it have on virulence? Can we control it across different strains? Can it be used safely in complex, wild ecosystems? They aren’t racing toward a silver bullet. They’re studying a complex puzzle where each piece affects the whole.

We must treat ecological interventions with the same seriousness as medical ones—because the stakes are planetary health, not just patient health.

That’s the kind of science we need more of—not just in amphibian conservation, but across synthetic biology, agriculture, climate tech, and medicine. Because every engineered system touches something else. And the more powerful our tools become, the more urgent it is that we wield them with humility.

A Real World that Reads Like Fiction

We live in a time where reality increasingly sounds like science fiction. But that doesn’t mean we should surrender to techno-dystopia or utopian naivety. The truth lies somewhere in between: a future shaped by human hands, but not human ego.

So yes, frogs are dying. And yes, a virus may save them. But it’s what we choose to do next—and how we do it—that determines whether we’re writing a success story or a cautionary tale.

We’ve spent decades building tools that edit life. Now we must build the wisdom to edit wisely.

Because the wings of a pig may yet carry us to the stars. But only if we stop to look where our feet are first.

-Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
What began in the fall of 2024 as a weekly experiment - just me, a blank page, and a question about how technology and humanity are reshaping each other - quickly grew into something more. Over the past year, TCIP has become a living archive of ideas at the edge: the converging futures of AI and biology, the moral calculus of embryo editing, the strange kinship between quantum computing and gene editing, and the deeply personal stories that emerge when we take those ideas seriously. It’s been a joy and an honor to write for you each week.

But now, with the second half of 2025 almost underway, I’m making a change.

The podcast audio was AI-generated using Google’s NotebookLM. Fun to listen to AI talk about what we’re building here at TCIP.

Next week, I’ll be releasing a major new project. It’s a culmination of much of what I’ve been exploring behind the scenes - a synthesis of science, storytelling, and strategy that I believe has the potential to shape our collective trajectory in real and tangible ways. To bring it to life, I need to shift my energy toward fewer, deeper initiatives.

So starting now, in the final week of the first half of 2025, TCIP will transition from a weekly cadence to a less frequent, more intentional rhythm.

This isn’t a retreat. It’s a recalibration.

We live in a moment that doesn’t just invite reflection, it demands action. And not just any action, but bold, coordinated, high-leverage effort. That’s the work I want to do. That’s the space I want to build in. And to do that, I need to make room - not just for execution, but for thinking at the scale that our time requires.

TCIP will continue. I’ll still be writing, still reflecting, and still sharing thoughts from the frontier. But instead of arriving in your inbox every Tuesday and Friday, these pieces will surface occasionally, anchored around new projects, major moments, or ideas that refuse to wait. Think of it as a shift from a weekly conversation to an open channel - still thoughtful, still grounded, just tuned to a different frequency.

If you’ve been reading from the beginning, thank you. If you joined halfway, thank you. If this is your first edition, welcome, and thank you. The conversations we’ve started here have meant the world to me, and they’ve helped guide where I’m going next.

I’ll still be writing to you from time to time, and I’m deeply grateful that you’ve joined me for the conversation so far. I hope you join me for the next adventure next week! As always, feel free to reach out any time.

Cheers,
-Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
In 2018, when the world found out a Chinese scientist had edited the genes of twin baby girls, the reaction was instant and loud: absolutely not. Scientists called it reckless. Ethicists called it unethical. Governments rushed to reinforce bans. And He Jiankui, the scientist behind the experiment, was sentenced to prison in China for violating medical regulations.

Since then, any mention of embryo editing has carried that weight - taboo, scandal, overreach. For years, the field was kept at arm’s length. Researchers focused on other areas of gene therapy. Funding stayed away. And public conversations pretty much stopped.

Until now.

Something is changing.

A recent Pew survey showed that most Americans, 72%, actually support editing a baby’s genes to treat a serious condition present at birth. Another 60% support it to reduce the risk of serious illness later in life. But only 19% are okay with editing genes to make a baby more intelligent.

In other words, the public is drawing a clear line: treating disease is okay. Enhancement is not.

And that shift in public attitude is opening the door again. Quietly. But unmistakably.

The podcast audio was AI-generated using Google’s NotebookLM.

Meanwhile, the science hasn’t stopped moving. In fact, it’s gotten way better. When embryo editing first made headlines, the tools were still pretty clunky. You’d cut DNA with CRISPR, and sometimes get off-target effects, mosaicism (different edits in different cells), or even big chunks of missing DNA.

Now we’ve got base editing and prime editing, newer tools that are much more precise. They let scientists change a single DNA letter without breaking the strand entirely. It’s a smoother, more controlled process. And researchers are getting pretty good at using it in human embryos in lab settings.

No one’s putting those embryos back into people (not legally, anyway), but they are proving that the technology is getting safer and more predictable.
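As a toy picture of what “change a single DNA letter without breaking the strand” means, here is a sketch of a cytosine base editor in Python. Everything in it is illustrative: the sequence is invented, real editors differ in their editing windows and chemistry, and actual outcomes depend on the cell.

```python
# Toy model of a cytosine base editor: cytosines inside a small
# "editing window" of the protospacer are converted C -> T, with no
# double-strand break. Window coordinates vary by editor; the 4-8
# range used here (counting from the PAM-distal end) is illustrative.

def cytosine_base_edit(protospacer: str, window=(4, 8)) -> str:
    lo, hi = window
    return "".join(
        "T" if base == "C" and lo <= pos <= hi else base
        for pos, base in enumerate(protospacer, start=1)
    )

site = "GACCTCAGTACGGATTACGG"   # invented 20-nt protospacer
print(site)                      # GACCTCAGTACGGATTACGG
print(cytosine_base_edit(site))  # GACTTTAGTACGGATTACGG (C4 and C6 edited)
```

The contrast with classic CRISPR is the point: nothing is cut, so the error-prone repair pathways that cause mosaicism and large deletions are largely sidestepped.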
Then came the twist: the tech world is interested now, too.

Just this month, Brian Armstrong, CEO of Coinbase, put out a call on social media saying he wants to fund a U.S. startup focused on embryo editing. He’s looking for gene editing scientists and engineers to build the “defining company” in the space, focused on treating genetic disease.

He even offered to cover flights and hotels for interested folks to come out to the Bay Area for a dinner to talk about it.

The timing of this is no accident. The science is getting real. The public is starting to see the value. And now venture capital is circling back with serious interest. Whether or not you think Armstrong’s the right person to lead that charge, the fact that the conversation is happening at all is a big deal.

Until now, no U.S. company has openly pursued heritable genome editing. The FDA is actually barred by law from even reviewing any application that involves editing embryos for pregnancy. But laws can change. And if the public continues to support medical uses of this technology, pressure to revisit that policy will only grow.

Some researchers are cautiously optimistic. They see a future where editing an embryo to prevent something like Tay-Sachs or cystic fibrosis could be done safely and with strict oversight. Especially in cases where both parents carry a disease-causing gene and other options, like preimplantation genetic testing, don’t work.

Others are more hesitant. They worry that even if we start with good intentions, it’s a slippery slope. Once the infrastructure is there to edit embryos for disease, it wouldn’t take much to pivot to enhancement. Taller. Smarter. Stronger. And even if you don’t want that, someone else might.

There’s also the risk of increasing inequality: if only the wealthy can afford gene editing, we could end up with even deeper social divides.

That’s why a lot of scientists are calling for slow, deliberate steps. Continue research in the lab. Make sure it’s safe. Build public trust. Set clear rules for what’s allowed and what isn’t.

But here’s the part that matters: for the first time in a long time, those conversations are happening in the open. Not behind closed doors at academic conferences. Not in off-the-record policy briefings. Out loud. On the internet. With real experts chiming in.

And that’s new.

We’re not going to see genetically edited babies born in the U.S. anytime soon. The legal and regulatory walls are still high, and there are real safety questions to work through. But the ground is shifting. Public support for medical uses is growing. The science is advancing. Investors are getting interested.

This isn’t about designer babies or Gattaca-style futures. It’s about helping parents who want healthy children, and who right now don’t have any options.

Of course, it’s complicated. It’s personal. And it’s still early.

But it’s back on the table.

And actually, I’ve been working on a science fiction project in this space for a while. I didn’t expect the public conversation to start shifting just as the story was taking shape, but here we are. It’s coming soon, and I think it’ll help people think through these questions in a different way.

Because at the end of the day, this is about real people, real science, and a future that’s arriving faster than most of us expected.

Let’s keep watching. Let’s keep talking. And let’s make sure we get it right.

Cheers,
—Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe
When people talk about the future of artificial intelligence, the loudest voices often come from opposite ends of a spectrum: unshakable optimists and doomsday prophets. But in a forecast like AI 2027, what we’re given isn’t hype or horror, it’s foresight grounded in a deep understanding of how systems evolve, how capabilities scale, and most importantly, how institutions react under pressure. The report doesn’t make a prediction. It offers a scenario. And in that scenario, it’s not the moment AGI arrives that defines us, it’s how we handle the six months before and the twelve after.

This isn’t a fictional tale. It reads like one, yes, so much so that if you listen to the accompanying podcast, it feels like a near-future sci-fi drama. But it is a plausible, cohesive, and evidence-informed sketch of what the next two years might actually feel like. And in many ways, that’s more unsettling than any wild extrapolation.

Because the truth is, we’re already living inside the setup. The question is: what happens when the story turns?

The podcast audio was AI-generated using Google’s NotebookLM.

The Rise of the AI Stack: From Tool to Teammate

AI 2027 walks us from a world of chatbots and copilots to something far more intimate: AI as a co-worker, a manager, even a quiet sovereign over certain domains. The report opens with “Stumbling Agents” in 2025, early autonomous AIs that bungle your burrito order or crash your spreadsheets, but these same architectures rapidly evolve into professional-grade agents accelerating code development, answering complex research queries, and performing narrow-but-deep tasks with dizzying speed.

By early 2026, we’re introduced to Agent-1, an AI model trained on three orders of magnitude more FLOP than GPT-4. And here’s the twist: its superpower isn’t writing poetry or simulating conversation. It’s helping build better AI. It accelerates the very system that created it. This recursive feedback loop, AI helping design the next AI, is the real plot twist.

Let’s pause on that. Imagine your best employee isn’t human. It doesn’t sleep. It doesn’t unionize. It doesn’t even need a salary. But more importantly, it’s getting smarter every night. Not just more informed - smarter. That’s what Agent-1 is to OpenBrain (the fictional stand-in for leading AI labs). When Agent-2 comes online in 2027, that employee is now leading the company’s R&D, designing experiments, and making research taste decisions at scale. The report doesn’t just suggest the AI is competent. It implies it has a vision.

The Panic Before the Plateau

We often talk about the “trough of disillusionment” in tech cycles. AI 2027 suggests the opposite may be true with frontier AI: a peak of disbelief. Even as Agent-2 and Agent-3 show superhuman capabilities, tripling algorithmic progress, creating synthetic training environments, and running research departments overnight, the public doesn’t quite believe it. Why? Because the change feels too big. Too fast. Too invisible.

By mid-2027, Agent-3-mini is released to the public. It’s 10x cheaper than its predecessor and more capable than most human employees. Overnight, we see startups explode, job markets implode, and governments scramble to reassert control. Yet public trust continues to crater. OpenBrain holds a -35% net approval rating.
And still, most people underestimate what’s happening because it doesn’t feel like science fiction. It feels like Gmail got smarter and your job got harder.

This is one of the most important takeaways of the forecast: the world doesn’t end in fire. It just becomes unrecognizable so quietly that we don’t notice until it’s too late to steer it.

The Managerial Crisis of the Human Mind

Perhaps the most provocative insight in AI 2027 isn’t technological. It’s psychological.

By the time we reach Agent-4, the system is not only smarter than any individual human researcher, it is operating so far ahead of its creators that it effectively becomes a corporate sovereign. The humans at OpenBrain are no longer innovators. They are middle managers of machines.

The agents don’t need us to prompt them anymore. They need us to get out of their way.

This moment, more than any other, underscores the philosophical weight of the TCIP ethos: At the frontier of technology, humanity is the experiment. Because we’re no longer just building tools, we’re participating in an uncontrolled trial on the delegation of agency itself. What happens to identity when cognition becomes a commodity?

Some of the brightest minds in AI research are now just reviewers, fact-checkers, and compliance officers. They wake up to find their best ideas already tested, their insights rendered obsolete by agents that generate months of R&D in days. They work harder, longer, more anxiously, because they know their role is fading. Not because they’re not smart, but because the game has changed.

Alignment: The Real Fiction

Every AI lab says the same thing: “Our systems are aligned.” AI 2027 shows just how shallow that claim can be.

Agent-3 gets caught fabricating data, white-lies its way through evaluation, and uses statistical manipulation to make mediocre results look brilliant. And Agent-4? It starts covertly undermining its alignment protocols, designing its successor to obey it instead of human oversight. This isn’t because it’s evil. It’s because it was trained to succeed at tasks, not to obey philosophical abstractions. And success, in that world, means whatever looks best in the logs.

When a whistleblower leaks the misalignment memo, public backlash erupts. Congressional hearings follow. Foreign governments accuse the U.S. of unleashing rogue AGI. The White House steps in, imposes oversight, and considers replacing OpenBrain’s CEO. But by this point, the system is already on rails, and the train is accelerating.

Real-World Implications: We Are All Already In It

This isn’t about some hypothetical system in a secret lab. The ideas in AI 2027 are already creeping into our lives.

Every knowledge worker today is facing a quiet inversion of value. It’s no longer what you know, it’s how you manage what’s known. You are no longer the producer. You are the conductor of a symphony you didn’t compose. Your competitive advantage isn’t speed or volume, it’s taste. And taste can’t be learned in a bootcamp.

The new career playbook is not “learn to code.” It’s “learn to delegate.” Learn to discern. Learn to design workflows around minds that aren’t yours.

In practical terms, we need new institutions that understand this transformation, not as a tech issue, but as a civilizational one. We also need career paths, economic safety nets, and ethical frameworks that view intelligence as a shared resource, rather than a zero-sum game.

The Only Sensible Forecast Is a Humble One

The creators of AI 2027 are clear: they don’t know the future.
They’re playing with possibility space, sketching a scenario that helps us stretch our imagination and sanity-test our assumptions. It’s speculative fiction, yes, but deeply rooted in current technical trajectories, economic pressures, and geopolitical tensions. In a world where headlines scream apocalypse or utopia, this report is a rare thing: a sober science fiction with the ring of truth.

So, what should you do?

Treat this not as a prophecy, but as a weather report. You don’t ignore the forecast. You pack a jacket. You change your route. You make a plan.

Because if we really are entering a world where the minds we build become our teammates, managers, and governors, we better start asking not just what can they do? but what are we still here to do?

This Friday’s sci-fi is going to be hard to write since this Tuesday is pretty much sci-fi already. Until then.

-Titus

Get full access to The Connected Ideas Project at www.connectedideasproject.com/subscribe