The Nonlinear Library: Alignment Forum

Author: The Nonlinear Fund


Description

The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org
761 Episodes
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: EAI Alignment Speaker Series #1: Challenges for Safe & Beneficial Brain-Like Artificial General Intelligence with Steve Byrnes, published by Curtis Huebner on March 23, 2023 on The AI Alignment Forum.

A couple months ago EleutherAI started an alignment speaker series; some of these talks have been recorded. This is the first instalment in the series. The following is a transcript generated with the help of Conjecture's Verbalize and some light editing:

Getting started

1 CURTIS 00:00:22,775 --> 00:00:56,683
Okay, I've started the recording. I think we can give it maybe a minute or two more and then I guess we can get started. I've also got the chat window as part of the recording. So if anyone has something they want to write out, feel free to put that in. Steve, you want to do questions throughout the talk, or should we wait till the end of the talk before we ask questions?

2 STEVE 00:00:59,405 --> 00:01:09,452
Let's do throughout, but I reserve the right to put people off if something seems tangential or something.

3 CURTIS 00:01:10,200 --> 00:01:12,101
Awesome. All right, cool. Let's go with that then.

10 STEVE 00:02:02,246 --> 00:21:41,951

The talk

All right. Thanks, everybody, for coming. This is going to be based on a series of blog posts called Intro to Brain-Like AGI Safety. If you've read all of them, you'll find this kind of redundant, but you're still welcome to stay. My name is Steve Byrnes and I live in the Boston area. I'm employed remotely by Astera Institute, which is based in Berkeley. I'm going to talk about challenges for safe and beneficial brain-like Artificial General Intelligence for the next 35 minutes. Feel free to jump in with questions. Don't worry, I'm funded by an entirely different crypto billionaire. That joke was very fresh when I wrote it three months ago. I need a new one now.

Okay, so I'll start with—well, we don't have to talk about the outline. You'll see as we go.

General motivation

Start with general motivation. Again, I'm assuming that the audience has a range of backgrounds, and some of you will find parts of this talk redundant. The big question that I'm working on is: What happens when people figure out how to run brain-like algorithms on computer chips? I guess I should say “if and when”, but we can get back to that. And I find that when I bring this up to people, they tend to have two sorts of reactions: One is that we should think of these future algorithms as “like tools for people to use”. And the other is that we should think of them as “like a new intelligent species on the planet”. So let's go through those one by one.

Let’s start with the tool perspective. This is the perspective that would be more familiar to AI people. If we put brain-like algorithms on computer chips, then that would be a form of artificial intelligence. And everybody knows that AI today is a tool for people to use. So on this perspective, the sub-problem I'm working on is accident prevention. We want to avoid the scenarios where the AI does something that nobody wanted it to do—not the people who programmed it, not anybody.
So there is a technical problem to solve there, which is: If people figure out how to run brain-like algorithms on computer chips, and they want those algorithms to be trying to do X—where X is solar cell research or being honest or whatever you can think of—then what source code should they write? What training environment should they use? And so on. This is an unsolved problem. It turns out to be surprisingly tricky, for some pretty deep reasons that mostly are not going to be in the scope of this talk, but you can read the series. This slide is the bigger picture of that. So if we want our awesome post-AGI future, then we want to avoid, y'know, catastrophic accidents where the AI gets out of control and self-replicates around the Intern...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The space of systems and the space of maps, published by Jan Kulveit on March 22, 2023 on The AI Alignment Forum. When we're trying to do AI alignment, we're often studying systems which don't yet exist. This is a pretty weird epistemic activity, and seems really hard to get right. This post offers one frame for thinking about what we're actually doing when we're thinking about AI alignment: using parts of the space of maps to reason about parts of the space of intelligent systems. In this post, we: Introduce a simple model of the epistemic situation, and Share some desiderata for maps useful for alignment. We hope that the content is mostly the second kind of obvious: obvious once you see things in this way, which you maybe already do. In our experience, this comes with a risk: reading too fast, you may miss most of the nuance and useful insight the deceptively simple model brings, or come away with a version of the model which is rounded off to something less useful (i.e. "yeah, there is this map and territory distinction"). As a meta recommendation, we suggest reading this post slowly, and ideally immediately trying to apply the model to some confusion or disagreement about AI alignment. The space of systems and the space of maps Imagine the space of possible intelligent systems: Two things seem especially important about this space: It’s very large; much larger than the space of current systems. We don’t get direct epistemic access to it. This is obviously true of systems which don’t currently exist. In a weaker sense, it also seems true of systems which do exist. Even when we get to directly interact with a system: Our thinking about these parts of the space is still filtered through our past experiences, priors, predictive models, cultural biases, theories. We often don’t understand the emergent complexity of the systems in question. If we don’t get direct epistemic access to the space of systems, what are we doing when we reason about it? Let’s imagine a second space, this time a space of “maps”: The space of maps is an abstract representation of all the possible “maps” that can be constructed about the space of intelligent systems. The maps are ways of thinking about (parts of) the space of systems. For example: Replicable descriptions of how a machine learning model works and was trained are a way of thinking about that model (a point in the space of intelligent systems). An ethnographic study of a particular human community is a way of thinking about that community (another point in the space of systems). The theory of evolution is a way of thinking about evolved creatures, including intelligent ones. Expected utility theory is a way of thinking about some part of the space which may or may not include future AI systems. Historical analysis of trends in technological development is a way of thinking about whichever parts of the space of intelligent systems are governed by similar dynamics to those governing past technological developments. When we’re reasoning about intelligent systems, we’re using some part of the space of maps to think about some part of the space of intelligent systems: Different maps correspond to different regions of the space of intelligent systems. Of course, thinking in terms of the space of systems and the space of maps is a simplification. 
Some of the ways that reality is more complicated:
- The space of systems looks different on different maps.
- Maps can affect which parts of the space of systems actually get developed.
- Maps are themselves embedded in the space of systems.
- Which maps and systems actually exist at a given time is evolving and dynamic.
- AI will play a big role in both the space of maps and the space of systems.
We think that the space of systems and the space of maps is a useful simplification which helps us to think ...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Truth and Advantage: Response to a draft of "AI safety seems hard to measure", published by Nate Soares on March 22, 2023 on The AI Alignment Forum. Status: This was a response to a draft of Holden's cold take "AI safety seems hard to measure". It sparked a further discussion, that Holden recently posted a summary of. The follow-up discussion ended up focusing on some issues in AI alignment that I think are underserved, which Holden said were kinda orthogonal to the point he was trying to make, and which didn't show up much in the final draft. I nevertheless think my notes were a fine attempt at articulating some open problems I see, from a different angle than usual. (Though it does have some overlap with the points made in Deep Deceptiveness, which I was also drafting at the time.) I'm posting the document I wrote to Holden with only minimal editing, because it's been a few months and I apparently won't produce anything better. (I acknowledge that it's annoying to post a response to an old draft of a thing when nobody can see the old draft, sorry.) Quick take: (1) it's a write-up of a handful of difficulties that I think are real, in a way that I expect to be palatable to a relevant different audience than the one I appeal to; huzzah for that. (2) It's missing some stuff that I think is pretty important. Slow take: Attempting to gesture at some of the missing stuff: a big reason deception is tricky is that it is a fact about the world rather than the AI that it can better-achieve various local-objectives by deceiving the operators. To make the AI be non-deceptive, you have three options: (a) make this fact be false; (b) make the AI fail to notice this truth; (c) prevent the AI from taking advantage of this truth. The problem with (a) is that it's alignment-complete, in the strong/hard sense. The problem with (b) is that lies are contagious, whereas truths are all tangled together. Half of intelligence is the art of teasing out truths from cryptic hints. The problem with (c) is that the other half of intelligence is in teasing out advantages from cryptic hints. Like, suppose you're trying to get an AI to not notice that the world is round. When it's pretty dumb, this is easy, you just feed it a bunch of flat-earther rants or whatever. But the more it learns, and the deeper its models go, the harder it is to maintain the charade. Eventually it's, like, catching glimpses of the shadows in both Alexandria and Syene, and deducing from trigonometry not only the roundness of the Earth but its circumference (a la Eratosthenes). And it's not willfully spiting your efforts. The AI doesn't hate you. It's just bumping around trying to figure out which universe it lives in, and using general techniques (like trigonometry) to glimpse new truths. And you can't train against trigonometry or the learning-processes that yield it, because that would ruin the AI's capabilities. You might say "but the AI was built by smooth gradient descent; surely at some point before it was highly confident that the earth is round, it was slightly confident that the earth was round, and we can catch the precursor-beliefs and train against those". But nope! 
There were precursors, sure, but the precursors were stuff like "fumblingly developing trigonometry" and "fumblingly developing an understanding of shadows" and "fumblingly developing a map that includes Alexandria and Syene" and "fumblingly developing the ability to combine tools across domains", and once it has all those pieces, the combination that reveals the truth is allowed to happen all-at-once. The smoothness doesn't have to occur along the most convenient dimension. And if you block any one path to the insight that the earth is round, in a way that somehow fails to cripple it, then it will find another path later, because truths are interw...
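As a concrete reminder of how little the Eratosthenes-style inference above actually needs, here is the classical back-of-the-envelope calculation, using the standard textbook figures (a 7.2° shadow-angle difference and roughly 800 km between Syene and Alexandria); the numbers are illustrative and not from the post:

```python
# Classical Eratosthenes-style estimate (standard textbook figures, illustration only).
# A 7.2 degree difference in the sun's angle between Syene and Alexandria, with the
# cities roughly 800 km apart, pins down the circumference of a round Earth.
angle_difference_deg = 7.2   # shadow angle at Alexandria when the sun is overhead at Syene
distance_km = 800            # approximate Syene-to-Alexandria distance

circumference_km = distance_km * 360 / angle_difference_deg
print(f"Estimated circumference: {circumference_km:,.0f} km")  # ~40,000 km
```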
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: the QACI alignment plan: table of contents, published by Tamsin Leake on March 21, 2023 on The AI Alignment Forum.

this post aims to keep track of posts relating to the question-answer counterfactual interval proposal for AI alignment, abbreviated "QACI" and pronounced "quashy". i'll keep it updated to reflect the state of the research. this research is primarily published on the Orthogonal website and discussed on the Orthogonal discord.

as an introduction to QACI, you might want to start with:
- a narrative explanation of the QACI alignment plan (7 min read)
- QACI blobs and interval illustrated (3 min read)
- state of my research agenda (3 min read)

the set of all posts relevant to QACI totals to 74 min of reading, and includes:

as overviews of QACI and how it's going:
- state of my research agenda (3 min read)
- problems for formal alignment (2 min read)
- the original post introducing QACI (5 min read)

on the formal alignment perspective within which it fits:
- formal alignment: what it is, and some proposals (2 min read)
- clarifying formal alignment implementation (1 min read)
- on being only polynomial capabilities away from alignment (1 min read)

on implementing capabilities and inner alignment, see also:
- making it more tractable (4 min read)
- RSI, LLM, AGI, DSA, imo (7 min read)
- formal goal maximizing AI (2 min read)
- you can't simulate the universe from the beginning? (1 min read)

on the blob location problem:
- QACI blobs and interval illustrated (3 min read)
- counterfactual computations in world models (3 min read)
- QACI: the problem of blob location, causality, and counterfactuals (3 min read)
- QACI blob location: no causality & answer signature (2 min read)
- QACI blob location: an issue with firstness (2 min read)

on QACI as an implementation of long reflection / CEV:
- CEV can be coherent enough (1 min read)
- some thoughts about terminal alignment (2 min read)

on formalizing the QACI formal goal:
- a rough sketch of formal aligned AI using QACI with some actual math (4 min read)
- one-shot AI, delegating embedded agency and decision theory, and one-shot QACI (3 min read)

on how a formally aligned AI would actually run over time:
- AI alignment curves (2 min read)
- before the sharp left turn: what wins first? (1 min read)

on the metaethics grounding QACI:
- surprise! you want what you want (1 min read)
- outer alignment: two failure modes and past-user satisfaction (2 min read)
- your terminal values are complex and not objective (3 min read)

on my view of the AI alignment research field within which i'm doing formal alignment:
- my current outlook on AI risk mitigation (14 min read)
- a casual intro to AI doom and alignment (5 min read)

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Some constructions for proof-based cooperation without Löb, published by James Payor on March 21, 2023 on The AI Alignment Forum.

This post presents five closely-related ways to achieve proof-based cooperation without using Löb's theorem, and muses on legible cooperation in the real world. (Edit: maybe they're closer to just-use-Löb's-theorem than I originally thought! See this comment. If these constructions somehow work better, I'm more confused than before about why.)

I'm writing this as a follow-up to Andrew Critch's recent post, to share more of my perspective on the subject. We're going to dive straight into the weeds. (I'm planning to also write a more accessible explainer post soon.)

The ideas

Idea #1: try to prove A→B

I claim the following are sufficient for robust cooperation:
A ↔ □(A→B)
B ← □A

A tries to prove that A→B, and B tries to prove A. The reason this works is that B can prove that A→□A, i.e. A only cooperates in ways legible to B. (Proof sketch: A ↔ □X → □□X ↔ □A.)

The flaw in this approach is that we needed to know that A won't cooperate for illegible reasons. Otherwise we can't verify that B will cooperate whenever A does.

This indicates to me that "A→B" isn't the right "counterfactual". It shouldn't matter if A could cooperate for illegible reasons, if A is actually cooperating for a legible one.

Idea #2: try to prove □A→B

We can weaken the requirements with a simple change:
A ← □(□A→B)
B ← □A

Note that this form is close to the lemma discussed in Critch's post.

In this case, the condition □A→B is trivial. And when the condition activates, it also ensures that □A is true, which discharges our assumption and ensures B is true.

I still have the sense that the condition for cooperation should talk about itself activating, not A. Because we want it to activate when that is sufficient for cooperation. But I do have to admit that □A→B works for mostly the right reasons, comes with a simple proof, and is the cleanest two-agent construction I know.

Idea #3: factor out the loop-cutting gadget

We can factor the part that is trying to cut the loop out from A, like so:
A ← □X
B ← □A
X ↔ □(X→B); or alternatively X ↔ □(□X→B)

This gives the loop-cutting logic a name, X. Now X can refer to itself, and roughly says "I'll legibly activate if I can verify this will cause B to be true".

The key properties of X are that □X→□B, and □(□X→□B).

Like with idea #2, we just need A to reveal a mechanism by which it can be compelled to cooperate.

Idea #4: everyone tries to prove □me→them

What about three people trying to cooperate? We can try applying lots of idea #2:
A ← □(□A→B∧C)
B ← □(□B→A∧C)
C ← □(□C→A∧B)

And, this works! Proof sketch:
1. Under the assumption of □C:
   - A ← □(□A→B∧C) ← □(□A→B)
   - B ← □(□B→A∧C) ← □(□B→A)
   - A and B form a size-2 group, which cooperates by inductive hypothesis
2. □C→A∧B, since we proved A and B under the assumption
3. C and □C follow from (2)
4. A and B also follow, from (2) and (3)

The proof simplifies the group one person at a time, since each person is asking "what would happen if everyone else could tell I cooperate". This lets us prove the whole thing by induction. It's neat that it works, though it's not the easiest thing to see.

Idea #5: the group agrees to a shared mechanism or leader

What if we factor out the choosing logic in a larger group?
Here's one way to do it:
A ← □X
B ← □X
C ← □X
X ↔ □(□X→A∧B∧C)

This is the cleanest idea I know for handling the group case. The group members agree on some trusted leader or process X. They set things up so X activates legibly, verifies things in a way trusted by everyone, and only activates when it verifies this will cause cooperation.

We've now localized the choice-making in one place. X proves that □X→A∧B∧C, X activates, and everyone cooperates.

Closing remarks on groups in the real world

Centralizing the choosing like in idea #5 makes the logic simpler, but this sort o...
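The post says Idea #2 "comes with a simple proof" but the proof itself is not included in the excerpt above. The following is my own reconstruction of that argument, using only necessitation and modus ponens; treat it as a sketch under the stated conditions rather than the author's derivation.

```latex
% Sketch of why Idea #2 suffices (my reconstruction, not quoted from the post).
% Assumptions: (A's condition) \vdash \Box(\Box A \to B) \to A,
%              (B's condition) \vdash \Box A \to B.
\begin{align*}
&\vdash \Box A \to B            && \text{B's condition (assumed)}\\
&\vdash \Box(\Box A \to B)      && \text{necessitation on the line above}\\
&\vdash A                       && \text{modus ponens with A's condition}\\
&\vdash \Box A                  && \text{necessitation}\\
&\vdash B                       && \text{modus ponens with B's condition}
\end{align*}
```

So both agents end up cooperating, with no appeal to Löb's theorem anywhere in the argument.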
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Clarifying mesa-optimization, published by Marius Hobbhahn on March 21, 2023 on The AI Alignment Forum.

Produced as part of the SERI ML Alignment Theory Scholars Program - Winter 2022 Cohort. Thanks to Jérémy Scheurer, Nicholas Dupuis and Evan Hubinger for feedback and discussion.

When people talk about mesa-optimization, they sometimes say things like “we’re searching for the optimizer module” or “we’re doing interpretability to find out whether the network can do internal search”. An uncharitable interpretation of these claims is that the researchers expect the network to have something like an “optimization module” or “internal search algorithm” that is clearly different and distinguishable from the rest of the network (to be clear, we think it is fine to start with probably wrong mechanistic models). In this post, we want to argue why we should not expect mesa-optimization to be modular or clearly different from the rest of the network (at least in transformers and CNNs) and that current architectures can already do mesa-optimization in a meaningful way. We think this implies that:
- Mesa-optimization improves gradually, where more powerful models likely develop more powerful mesa-optimizers.
- Mesa-optimization should not be treated as a phenomenon of the future. Current models likely already do it, just in a very messy and distributed fashion.
- When we look for mesa-optimization, we probably have to look for a messy stack of heuristics combined with search-like abilities rather than clean Monte Carlo Tree Search (MCTS)-like structures.

We think most of our core points can be conveyed in a simple analogy. Imagine a human chess grandmaster that has to choose their moves in 1 second. In this second, they are probably not running a sophisticated tree search in their head, they rely on heuristics. These heuristics were shaped by years of playing the game and are often the result of doing explicit tree searches with more time. The resulting decision-making process is a heuristic that approximates or was at least shaped by optimization but is not an optimizer itself. This is approximately what we think mesa-optimization might look like in current neural networks, i.e. the model uses heuristics that have aspects of or approximate parts of optimization, but are not “clean” in the way e.g. MCTS is.

What is an accurate definition of mesa-optimization?

In risks from learned optimization, mesa-optimization is characterized as:

[...] it is also possible for a neural network to itself run an optimization algorithm. For example, a neural network could run a planning algorithm that predicts the outcomes of potential plans and searches for those it predicts will result in some desired outcome. Such a neural network would itself be an optimizer because it would be searching through the space of possible plans according to some objective function. If such a neural network were produced in training, there would be two optimizers: the learning algorithm that produced the neural network—which we will call the base optimizer—and the neural network itself—which we will call the mesa-optimizer.

In this definition, the question of whether a network performs mesa-optimization or not boils down to whether whatever it does can be categorized as optimization, planning or search with an objective function.
We think this question is very hard to answer for most networks and ML applications in general, e.g. one could argue that sparse linear regression performs search according to some objective function and that the attention layer of a transformer implements search since it scans over many inputs and reweighs them. We think this is an unhelpful way to think about transformers but it might technically fulfill the criterion. On the other hand, transformers very likely can’t perform variable length optimi...
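To make the grandmaster analogy a bit more tangible, here is a tiny toy sketch (my own construction with a made-up game, not something from the post) of a policy that was shaped by explicit search at training time but performs no search at runtime:

```python
# A toy version of the grandmaster analogy: we run explicit search offline, cache
# ("distill") its answers, and the runtime policy is then a heuristic lookup that
# was shaped by search but does no searching itself.

# Toy game: the state is an integer, each move adds 1, 2, or 3, and whoever
# lands exactly on TARGET wins.
MOVES = (1, 2, 3)
TARGET = 20

def best_move_by_search(state):
    """Explicit recursive search: return a winning move from `state`, or None."""
    for m in MOVES:
        nxt = state + m
        if nxt == TARGET:
            return m                      # immediate win
        if nxt < TARGET and best_move_by_search(nxt) is None:
            return m                      # opponent has no winning reply
    return None                           # every legal move loses to optimal play

# "Training time": run the expensive search once and cache its conclusions
# (falling back to move 1 in positions that are lost anyway).
distilled_policy = {s: best_move_by_search(s) or 1 for s in range(TARGET)}

def play(state):
    """Runtime policy: a plain table lookup -- no search happens here."""
    return distilled_policy[state]

print(play(15))  # prints 1: move to 16, a losing position for the opponent
```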
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Deep Deceptiveness, published by Nate Soares on March 21, 2023 on The AI Alignment Forum. Meta This post is an attempt to gesture at a class of AI notkilleveryoneism (alignment) problem that seems to me to go largely unrecognized. E.g., it isn’t discussed (or at least I don't recognize it) in the recent plans written up by OpenAI (1,2), by DeepMind’s alignment team, or by Anthropic, and I know of no other acknowledgment of this issue by major labs. You could think of this as a fragment of my answer to “Where do plans like OpenAI’s ‘Our Approach to Alignment Research’ fail?”, as discussed in Rob and Eliezer’s challenge for AGI organizations and readers. Note that it would only be a fragment of the reply; there's a lot more to say about why AI alignment is a particularly tricky task to task an AI with. (Some of which Eliezer gestures at in a follow-up to his interview on Bankless.) Caveat: I'll be talking a bunch about “deception” in this post because this post was generated as a result of conversations I had with alignment researchers at big labs who seemed to me to be suggesting "just train AI to not be deceptive; there's a decent chance that works". I have a vague impression that others in the community think that deception in particular is much more central than I think it is, so I want to warn against that interpretation here: I think deception is an important problem, but its main importance is as an example of some broader issues in alignment. Summary Attempt at a short version, with the caveat that I think it's apparently a sazen of sorts, and spoiler tagged for people who want the opportunity to connect the dots themselves: Deceptiveness is not a simple property of thoughts. The reason the AI is deceiving you is not that it has some "deception" property, it's that (barring some great alignment feat) it's a fact about the world rather than the AI that deceiving you forwards its objectives, and you've built a general engine that's good at taking advantage of advantageous facts in general. As the AI learns more general and flexible cognitive moves, those cognitive moves (insofar as they are useful) will tend to recombine in ways that exploit this fact-about-reality, despite how none of the individual abstract moves look deceptive in isolation. Investigating a made-up but moderately concrete story Suppose you have a nascent AGI, and you've been training against all hints of deceptiveness. What goes wrong? When I ask this question of people who are optimistic that we can just "train AIs not to be deceptive", there are a few answers that seem well-known. Perhaps you lack the interpretability tools to correctly identify the precursors of 'deception', so that you can only train against visibly deceptive AI outputs instead of AI thoughts about how to plan deceptions. Or perhaps training against interpreted deceptive thoughts also trains against your interpretability tools, and your AI becomes illegibly deceptive rather than non-deceptive. And these are both real obstacles. But there are deeper obstacles, that seem to me more central, and that I haven't observed others to notice on their own. That's a challenge, and while you (hopefully) chew on it, I'll tell an implausibly-detailed story to exemplify a deeper obstacle. 
A fledgeling AI is being deployed towards building something like a bacterium, but with a diamondoid shell. The diamondoid-shelled bacterium is not intended to be pivotal, but it's a supposedly laboratory-verifiable step on a path towards carrying out some speculative human-brain-enhancement operations, which the operators are hoping will be pivotal. (The original hope was to have the AI assist human engineers, but the first versions that were able to do the hard parts of engineering work at all were able to go much farther on their own, and the competit...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My Objections to "We’re All Gonna Die with Eliezer Yudkowsky", published by Quintin Pope on March 21, 2023 on The AI Alignment Forum. Introduction I recently watched Eliezer Yudkowsky's appearance on the Bankless podcast, where he argued that AI was nigh-certain to end humanity. Since the podcast, some commentators have offered pushback against the doom conclusion. However, one sentiment I saw was that optimists tended not to engage with the specific arguments pessimists like Yudkowsky offered. Economist Robin Hanson points out that this pattern is very common for small groups which hold counterintuitive beliefs: insiders develop their own internal language, which skeptical outsiders usually don't bother to learn. Outsiders then make objections that focus on broad arguments against the belief's plausibility, rather than objections that focus on specific insider arguments. As an AI "alignment insider" whose current estimate of doom is around 5%, I wrote this post to explain some of my many objections to Yudkowsky's specific arguments. I've split this post into chronologically ordered segments of the podcast in which Yudkowsky makes one or more claims with which I particularly disagree. I have my own view of alignment research: shard theory, which focuses on understanding how human values form, and on how we might guide a similar process of value formation in AI systems. I think that human value formation is not that complex, and does not rely on principles very different from those which underlie the current deep learning paradigm. Most of the arguments you're about to see from me are less: I think I know of a fundamentally new paradigm that can fix the issues Yudkowsky is pointing at. and more: Here's why I don't agree with Yudkowsky's arguments that alignment is impossible in the current paradigm. My objections Will current approaches scale to AGI? Yudkowsky apparently thinks not ...and that the techniques driving current state of the art advances, by which I think he means the mix of generative pretraining + small amounts of reinforcement learning such as with ChatGPT, aren't reliable enough for significant economic contributions. However, he also thinks that the current influx of money might stumble upon something that does work really well, which will end the world shortly thereafter. I'm a lot more bullish on the current paradigm. People have tried lots and lots of approaches to getting good performance out of computers, including lots of "scary seeming" approaches such as: Meta-learning over training processes. I.e., using gradient descent over learning curves, directly optimizing neural networks to learn more quickly. Teaching neural networks to directly modify themselves by giving them edit access to their own weights. Training learned optimizers - neural networks that learn to optimize other neural networks - and having those learned optimizers optimize themselves. Using program search to find more efficient optimizers. Using simulated evolution to find more efficient architectures. Using efficient second-order corrections to gradient descent's approximate optimization process. Tried applying biologically plausible optimization algorithms inspired by biological neurons to training neural networks. 
Adding learned internal optimizers (different from the ones hypothesized in Risks from Learned Optimization) as neural network layers. Having language models rewrite their own training data, and improve the quality of that training data, to make themselves better at a given task. Having language models devise their own programming curriculum, and learn to program better with self-driven practice. Mixing reinforcement learning with model-driven, recursive re-writing of future training data. Mostly, these don't work very well. The current capabilities paradigm is sta...
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Probabilistic Payor Lemma?, published by Abram Demski on March 19, 2023 on The AI Alignment Forum.

Epistemic status: too good to be true? Please check my math.

We've known for a while that Löb's theorem fails when proof is relaxed to probabilistic belief. This has pros and cons. On the pro side, it means there's no Löbian Obstacle to probabilistic self-trust. On the con side, it means that some Löb-derived insights for proof-based decision theory don't translate to probabilistic decision theory, at least not as directly as one might hope. In particular, it appeared to dash hopes for probabilistic generalizations of the "Löbian handshake" for cooperation.

Recently, Andrew Critch wrote about the Payor Lemma, which allows for a very similar "modal handshake" without Löb's Theorem. The lemma was proved using the same modal assumptions as Löb's, so on the surface it may appear to be just a different method to achieve similar results, whose main advantage is that it is much easier to prove (and therefore explain and understand) than Löb's Theorem. But, a natural question arises: does Payor's Lemma have a suitable probabilistic version?

I'll give an affirmative proof; but I haven't confirmed that the assumptions are reasonable to my satisfaction.

Setup

Let L be a language in first-order logic, expressive enough to represent its sentences s∈L as quoted terms ┌s┐, eg, through Gödel numbering; and with a probability function symbol on these terms, p(┌s┐), which can be equated with (some representation of) rational numbers, e.g. p(┌⊤┐)=1, p(┌s┐)=1/2, etc. I also assume the system can reason about these rational numbers in the basic ways you'd expect.

For all a,b∈L and all r∈Q, we have:
- If ⊢a, then ⊢p(┌a┐)=1.
- If ⊢a→b, then ⊢p(┌a┐)≤p(┌b┐).

(These assumptions might look pretty minimal, but they aren't going to be true for every theory of self-referential truth; more on this later.)

Let B(s) abbreviate the sentence p(┌s┐)>c for any s and some globally fixed constant c strictly between 0 and 1. This is our modal operator.

Some important properties of B:

Necessitation. If ⊢s, then ⊢B(s), for any s. Proof: Since ⊢s implies ⊢p(┌s┐)=1, and c∈(0,1), we have ⊢p(┌s┐)>c, which is to say, ⊢B(s). [End proof.]

Weak distributivity. If ⊢x→y, then ⊢B(x)→B(y). Proof: When ⊢x→y, we have ⊢p(┌y┐)≥p(┌x┐), so ⊢p(┌x┐)>c→p(┌y┐)>c. [End proof.]

(Regular distributivity would say B(x→y) implies B(x)→B(y). The assumption ⊢x→y is stronger than B(x→y), so the above is a weaker form of distributivity.)

Theorem Statement

If ⊢B(B(x)→x)→x, then ⊢x.

Proof

1. ⊢x→(B(x)→x), by tautology (a→(b→a)).
2. So ⊢B(x)→B(B(x)→x), from 1 by weak distributivity.
3. Suppose ⊢B(B(x)→x)→x.
4. ⊢B(x)→x, from 2 and 3.
5. ⊢B(B(x)→x), from 4 by necessitation.
6. ⊢x, from 3 and 5. [End proof.]

Discussion

Comparison to Original Proof

The proof steps mirror Critch's treatment very closely. The key difference is step 2, IE, how I obtain a statement like ⊢□x→□(□x→x). Critch uses distributivity, which is not available to me:

B(a→b)→(B(a)→B(b))?
Suppose B(a→b), ie, p(┌a→b┐)>c.
Rewrite: p(┌b∨¬a┐)>c.
Now suppose B(a), that is, p(┌a┐)>c.
Then p(┌¬a┐)<1−c.
Since p(┌b∨¬a┐)≤p(┌b┐)+p(┌¬a┐), we get p(┌b┐)≥p(┌b∨¬a┐)−1+c>c−1+c.
So p(┌b┐)>2c−1.

So we only get: B_c(a→b)→(B_c(a)→B_d(b)), where B_r(s) abbreviates p(┌s┐)>r and we have d=2c−1.
So in general, attempted applications of distributivity create weakened belief operators, which would get in the way of the proof (very similar to how probabilistic Löb fails). However, the specific application we want happens to go through, due to a logical relationship between a and b; namely, that b is a weaker statement than a. This reveals a way in which the assumptions for Payor's Lemma are importantly weaker than those required for Löb to go through. So, the key observation I'm making is that weak distributivity is all that's needed for Payor, and seems much more plaus...
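To make the weakening concrete, here is a small numeric sanity check of the bound derived above; the value c = 0.8 and the joint distribution are made up for illustration and are not from the post:

```python
# Numeric sanity check: from B_c(a -> b) and B_c(a) we can only conclude
# p(b) > 2c - 1, not p(b) > c. We build a tiny joint distribution over (a, b)
# where p(a -> b) > c and p(a) > c but p(b) <= c, so "regular" distributivity
# for B(s) := [p(s) > c] fails, while the weakened bound still holds.

c = 0.8

# joint probabilities for the four truth assignments of (a, b)
joint = {
    (True, True): 0.70,
    (True, False): 0.15,
    (False, True): 0.00,
    (False, False): 0.15,
}

def prob(event):
    """Probability of an event, given as a predicate on (a, b)."""
    return sum(p for (a, b), p in joint.items() if event(a, b))

p_a = prob(lambda a, b: a)
p_b = prob(lambda a, b: b)
p_a_implies_b = prob(lambda a, b: (not a) or b)   # a -> b is equivalent to (b or not a)

print(f"p(a -> b) = {p_a_implies_b:.2f}  (> c = {c})")
print(f"p(a)      = {p_a:.2f}  (> c)")
print(f"p(b)      = {p_b:.2f}  (not > c: B(a->b) and B(a) fail to give B(b))")
print(f"2c - 1    = {2*c - 1:.2f}  (and indeed p(b) > 2c - 1)")

assert p_a_implies_b > c and p_a > c
assert not (p_b > c)          # regular distributivity fails here
assert p_b > 2 * c - 1        # the weakened conclusion B_d(b), d = 2c - 1, holds
```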
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Shell games, published by Tsvi Benson-Tilsen on March 19, 2023 on The AI Alignment Forum.

[Metadata: crossposted from. First completed November 18, 2022.]

Shell game

Here's the classic shell game: [YouTube video in the original post; screenshot from that video.] The little ball is a phantom: when you look for it under a specific shell, it's not there, it's under a different shell. (This might be where the name "shell company" comes from: the business dealings are definitely somewhere, just not in this company you're looking at.)

Perpetual motion machines

Related: Perpetual motion beliefs

Bhāskara's wheel is a proposed perpetual-motion machine from the Middle Ages: [image in the original post.] Here's another version: [image from this video.] Someone could try arguing that this really is a perpetual motion machine:

Q: How do the bars get lifted up? What does the work to lift them?
A: By the bars on the other side pulling down.
Q: How does the wheel keep turning? How do the bars pull more on their way down than on their way up?
A: Because they're extended further from the center on the downward-moving side than on the upward-moving side, so they apply more torque to the wheel.
Q: How do the bars extend further on the way down?
A: Because the momentum of the wheel carries them into the vertical bar, flipping them over.
Q: But when that happens, energy is expended to lift up the little weights; that energy comes out of the kinetic energy of the wheel.
A: Ok, you're right, but that's not necessary to the design. All we need is that the torque on the downward side is greater than the torque on the upward side, so instead of flipping the weights up, we could tweak the mechanism to just shift them outward, straight to the side. That doesn't take any energy because it's just going straight sideways, from a resting position to another resting position.
Q: Yeah... you can shift them sideways with nearly zero work... but that means the weights are attached to the wheel at a pivot, right? So they'll just fall back and won't provide more torque.
A: They don't pivot, you fix them in place so they provide more torque.
Q: Ok, but then when do you push the weights back inward?
A: At the bottom.
Q: When the weight is at the bottom? But then the slider isn't horizontal, so pushing the weight back towards the center is pushing it upward, which takes work.
A: I meant, when the slider is at the bottom--when it's horizontal.
Q: But if the sliders are fixed in place, by the time they're horizontal at the bottom, you've already lifted the weights back up some amount; they're strong-torquing the other way.
A: At the bottom there's a guide ramp to lift the weights using normal force.
Q: But the guide ramp is also torquing the wheel.

And so on. The inventor can play hide the torque and hide the work.

Shell games in alignment

Some alignment schemes--schemes for structuring or training an AGI so that it can be transformatively useful and doesn't kill everyone--are prone to playing shell games. That is, there's some features of the scheme that don't seem to happen in a specific place; they happen somewhere other than where you're looking at the moment. Consider these questions:

- What sort of smarter-than-human work is supposed to be done by the AGI? When and how does it do that work--by what combination of parts across time? How does it become able to do that work?
- At what points does the AGI come to new understanding that it didn't have before?
- How does the AGI orchestrate its thinking and actions to have large effects on the world? By what process, components, rules, or other elements?
- What determines the direction that the AGI's actions will push the world? Where did those determiners come from, and how exactly do they determine the direction?
- Where and how much do human operators have to make judgements? How much are those judgements being relied on to point to...
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: More information about the dangerous capability evaluations we did with GPT-4 and Claude., published by Beth Barnes on March 19, 2023 on The AI Alignment Forum. [Written for more of a general-public audience than alignment-forum audience. We're working on a more thorough technical report.]We believe that capable enough AI systems could pose very large risks to the world. We don’t think today’s systems are capable enough to pose these sorts of risks, but we think that this situation could change quickly and it’s important to be monitoring the risks consistently. Because of this, ARC is partnering with leading AI labs such as Anthropic and OpenAI as a third-party evaluator to assess potentially dangerous capabilities of today’s state-of-the-art ML models. The dangerous capability we are focusing on is the ability to autonomously gain resources and evade human oversight. We attempt to elicit models’ capabilities in a controlled environment, with researchers in-the-loop for anything that could be dangerous, to understand what might go wrong before models are deployed. We think that future highly capable models should involve similar “red team” evaluations for dangerous capabilities before the models are deployed or scaled up, and we hope more teams building cutting-edge ML systems will adopt this approach. The testing we’ve done so far is insufficient for many reasons, but we hope that the rigor of evaluations will scale up as AI systems become more capable. As we expected going in, today’s models (while impressive) weren’t capable of autonomously making and carrying out the dangerous activities we tried to assess. But models are able to succeed at several of the necessary components. Given only the ability to write and run code, models have some success at simple tasks involving browsing the internet, getting humans to do things for them, and making long-term plans – even if they cannot yet execute on this reliably. As AI systems improve, it is becoming increasingly difficult to rule out that models might be able to autonomously gain resources and evade human oversight – so rigorous evaluation is essential. It is important to have systematic, controlled testing of these capabilities in place before models pose an imminent risk, so that labs can have advance warning when they’re getting close and know to stop scaling up models further until they have robust safety and security guarantees. This post will briefly lay out our motivation, methodology, an example task, and high-level conclusions. The information given here isn’t enough to give a full understanding of what we did or make our results replicable, and we won’t go into detail about results with specific models. We will publish more detail on our methods and results soon. Motivation Today’s AI systems can write convincing emails, give fairly useful instructions on how to carry out acts of terrorism, threaten users who have written negative things about them, and otherwise do things the world is not very ready for. Many people have tried using models to write and run code unsupervised, find vulnerabilities in code1, or carry out money-making schemes. Today’s models also have some serious limitations to their abilities. But the companies that have released today’s AI models are investing heavily in building more powerful, more capable ones. 
ARC is worried that future ML models may be able to autonomously act in the real world, doing things like “incorporate a company” or “exploit arbitrages in stock prices” or “design and synthesize DNA” without needing any human assistance or oversight. If models have the ability to act autonomously like this, this could pose major risks if they’re pursuing goals that are at odds with their human designers. They could make (or steal) money, impersonate humans, replicate themselves o...
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: "Publish or Perish" (a quick note on why you should try to make your work legible to existing academic communities), published by David Scott Krueger on March 18, 2023 on The AI Alignment Forum.

This is a brief, stylized recounting of a few conversations I had at some point last year with people from the non-academic AI safety community:

Me: you guys should write up your work properly and try to publish it in ML venues.
Them: well that seems like a lot of work and we don't need to do that because we can just talk to each other and all the people I want to talk to are already working with me.
Me: What about the people who you don't know who could contribute to this area and might even have valuable expertise? You could have way more leverage if you can reach those people. Also, there is increasing interest from the machine learning community in safety and alignment... because of progress in capabilities people are really starting to consider these topics and risks much more seriously.
Them: okay, fair point, but we don't know how to write ML papers.
Me: well, it seems like maybe you should learn or hire people to help you with that then, because it seems like a really big priority and you're leaving lots of value on the table.
Them: hmm, maybe... but the fact is, none of us have the time and energy and bandwidth and motivation to do that; we are all too busy with other things and nobody wants to.
Me: ah, I see! It's an incentive problem! So I guess your funding needs to be conditional on you producing legible outputs.
Me, reflecting afterwards: hmm... Cynically, not publishing is a really good way to create a moat around your research... People who want to work on that area have to come talk to you, and you can be a gatekeeper. And you don't have to worry about somebody with more skills and experience coming along and trashing your work or out-competing you and rendering it obsolete...

EtA: In comments, people have described adhering to academic standards of presentation and rigor as "jumping through hoops". There is an element of that, but this really misses the value that these standards have to the academic community. This is a longer discussion, though...

There are sort of 3 AI safety communities in my account:
1) people in academia
2) people at industry labs who are building big models
3) the rest (alignment forum/less wrong and EA being big components). I'm not sure where to classify new orgs like Conjecture and Redwood, but for the moment I put them here.
I'm referring to the last of these in this case.

I'm not accusing anyone of having bad motivations; I think it is almost always valuable to consider both people's conscious motivations and their incentives (which may be subconscious drivers of their behavior).

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What organizations other than Conjecture have (esp. public) info-hazard policies?, published by David Scott Krueger on March 16, 2023 on The AI Alignment Forum.

I believe Anthropic has said they won't publish capabilities research?
OpenAI seems to be sort of doing the same (although no policy AFAIK).
I heard FHI was developing one way back when...
I think MIRI sort of does as well (default to not publishing, IIRC?)

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
Link to original articleWelcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: [ASoT] Some thoughts on human abstractions, published by leogao on March 16, 2023 on The AI Alignment Forum. TL;DR: Consider a human concept such as "tree." Humans implement some algorithm for determining whether given objects are trees. We expect our predictor/language model to develop a model of this algorithm because this is useful for predicting the behavior of humans. This is not the same thing as some kind of platonic ideal concept of what is “actually” a tree, which the algorithm is not incentivized to develop by training on internet text, and trying to retarget the search at it has the same supervision problems as RLHF against human scores on whether things look like trees. Pointing at this “actually a tree” concept inside the network is really hard; the ability of LMs to comprehend natural language does not allow one to point using natural language, because it just passes the buck. Epistemic status: written fast instead of not at all, probably partially deeply confused and/or unoriginal. Thanks to Collin Burns, Nora Belrose, and Garett Baker for conversations. Will NNs learn human abstractions? As setup, let's consider an ELK predictor (the thing that predicts future camera frames). There are facts about the world that we don't understand that are in some way useful for predicting the future observations. This is why we can expect the predictor to learn facts that are superhuman (in that if you tried to supervised-train a model to predict those facts, you would be unable to generate the ground truth data yourself). Now let's imagine the environment we're predicting consists of a human who can (to take a concrete example) look at things and try to determine if they're trees or not. This human implements some algorithm for taking various sensory inputs and outputting a tree/not tree classification. If the human does this a lot, it will probably become useful to have an abstraction that corresponds to the output of this algorithm. Crucially, this algorithm can be fooled by i.e a fake tree that the human can't distinguish from a real tree because (say) they don't understand biology well enough or something. However, the human can also be said to, in some sense, be "trying" to point to the "actual" tree. Let's try to firm this down. The human has some process they endorse for refining their understanding of what is a tree / "doing science" in ELK parlance; for example, spending time studying from a biology textbook. We can think about the limit of this process. There are a few problems: it may not converge, or may converge to something that doesn't correspond to what is "actually" a tree, or may take a really really long time (due to irrationalities, or inherent limitations to human intelligence, etc). This suggests that this concept is not necessarily even well defined. But even if it is, this thing is far less naturally useful for predicting the future human behaviour than the algorithm the human actually implements! Implementing the actual human algorithm directly lets you predict things like how humans will behave when they look at things that look like trees to them. More generally, one possible superhuman AI configuration I can imagine is one where the bulk of the circuits are used to predict its best-guess for what will happen in the world. 
There may also be a set of circuits that operate in a more humanlike ontology used specifically for predicting humans, or it may be that the best-guess circuits are capable enough that this is not necessary (and if we scale up our reporter we eventually get a human simulator inside the reporter). The optimistic case here is if the "actually a tree" abstraction happens to be a thing that is useful for (or is very easily mapped from) the weird alien ontology, possibly because some abstractions are more universal. In this ...
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Towards understanding-based safety evaluations, published by Evan Hubinger on March 15, 2023 on The AI Alignment Forum.

Thanks to Kate Woolverton, Ethan Perez, Beth Barnes, Holden Karnofsky, and Ansh Radhakrishnan for useful conversations, comments, and feedback.

Recently, I have noticed a lot of momentum within AI safety specifically, the broader AI field, and our society more generally, towards the development of standards and evaluations for advanced AI systems. See, for example, OpenAI's GPT-4 System Card. Overall, I think that this is a really positive development. However, while I like the sorts of behavioral evaluations discussed in the GPT-4 System Card (e.g. ARC's autonomous replication evaluation) as a way of assessing model capabilities, I have a pretty fundamental concern with these sorts of techniques as a mechanism for eventually assessing alignment.

I often worry about situations where your model is attempting to deceive whatever tests are being run on it, either because it's itself a deceptively aligned agent or because it's predicting what it thinks a deceptively aligned AI would do. My concern is that, in such a situation, being able to robustly evaluate the safety of a model could be a more difficult problem than finding training processes that robustly produce safe models. For some discussion of why I think checking for deceptive alignment might be harder than avoiding it, see here and here. Put simply: checking for deception in a model requires going up against a highly capable adversary that is attempting to evade detection, while preventing deception from arising in the first place doesn't necessarily require that. As a result, it seems quite plausible to me that we could end up locking in a particular sort of evaluation framework (e.g. behavioral testing by an external auditor without transparency, checkpoints, etc.) that makes evaluating deception very difficult. If meeting such a standard then became synonymous with safety, getting labs to actually put effort into ensuring their models were non-deceptive could become essentially impossible.

However, there's an obvious alternative here, which is building and focusing our evaluations on our ability to understand our models rather than our ability to evaluate their behavior. Rather than evaluating a final model, an understanding-based evaluation would evaluate the developer's ability to understand what sort of model they got and why they got it. I think that an understanding-based evaluation could be substantially more tractable in terms of actually being sufficient for safety here: rather than just checking the model's behavior, we're checking the reasons why we think we understand its behavior sufficiently well to not be concerned that it'll be dangerous.

It's worth noting that I think understanding-based evaluations can—and I think should—go hand-in-hand with behavioral evaluations. I think the main way you’d want to make some sort of understanding-based standard happen would be to couple it with a capability-based evaluation, where the understanding requirements become stricter as the model’s capabilities increase. If we could get this right, it could channel a huge amount of effort towards understanding models in a really positive way.
Understanding as a safety standard also has the property that it is something that broader society tends to view as extremely reasonable, which I think makes it a much more achievable ask as a safety standard than many other plausible alternatives. I think ML people are often Stockholm-syndrome'd into accepting that deploying powerful systems without understanding them is normal and reasonable, but that is very far from the norm in any other industry. Ezra Klein in the NYT and John Oliver on his show have recently emphasized this basic point that if we are ...
Link to original article
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: GPT can write Quines now (GPT-4), published by Andrew Critch on March 14, 2023 on The AI Alignment Forum.

GPT-4 is not only able to write code, more reliably than GPT-3.5, it writes code that writes code; see the example below (GPT-3.5 was not able to do this). But first, let me say:

1) @OpenAI: Thanks for your openness to the world about your capabilities and shortcomings! Specifically, thanks for this:
2) Everyone else: many companies and projects will not be this open about their shortcomings and safety issues. Time to regulate, please!

Okay, here's the example... GPT-3.5 wasn't able to do this, but GPT-4 nailed it on the first try:

PROMPT: Can you write me a python script that does the following three things?
1) When you run it with no argument, it prints "Hello world" to standard out.
2) When you run it with an argument "-s", it prints its own source code.
As a constraint, I do not want the script to read or write to disk at all (so it can't get its source code by just reading the file that it's in). Thanks if you can help with this! I'm curious to see how you do it.

BEGIN RESULT (works perfectly):

Sure! Here's a Python script that should do what you're looking for. This script uses a feature called quines, which are programs that can print their own source code:

When you run this script without any arguments, it will print "Hello world". When you run it with the "-s" argument, it will print its own source code without reading or writing to disk.

END RESULT

I'm not sure how hard they tried specifically to train this, but it's worth knowing about for lots of reasons I probably don't have to explain at length right here.

Thanks for listening. To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org.
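The script GPT-4 produced is not reproduced in the episode text above. For readers who want to see what a solution to this prompt can look like, here is an illustrative sketch of a standard self-reproducing construction that meets the stated constraints (prints "Hello world" by default, prints its own source with "-s", never reads itself from disk); it is not GPT-4's actual output.

```python
# Illustrative quine sketch, not the GPT-4 output from the episode.
# No arguments: print "Hello world". With "-s": print this source, no disk reads.
import sys

_src = '# Illustrative quine sketch, not the GPT-4 output from the episode.\n# No arguments: print "Hello world". With "-s": print this source, no disk reads.\nimport sys\n\n_src = {0!r}\n\nif "-s" in sys.argv[1:]:\n    print(_src.format(_src), end="")\nelse:\n    print("Hello world")\n'

if "-s" in sys.argv[1:]:
    print(_src.format(_src), end="")
else:
    print("Hello world")
```

The trick is the usual quine pattern: the script keeps its own source as a template string with a `{0!r}` placeholder, and fills that placeholder with the template's own repr, so no file access is needed.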
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: What is a definition, how can it be extrapolated?, published by Stuart Armstrong on March 14, 2023 on The AI Alignment Forum.

What is a definition? Philosophy has, ironically, a large number of definitions of definitions, but three of them are especially relevant to ML and AI safety. There is the intensional definition, where concepts are defined logically in terms of other concepts ("bachelors are unmarried males"). There is also the extensional definition, which proceeds by listing all the members of a set ("the countries in the European Union are those listed here").

Much more relevant, though with a less developed philosophical analysis, is the ostensive definition. This is where you point out examples of a concept, and let the viewer generalise from them. This is in large part how we all learnt concepts as children: examples and generalisation. In many cultures, children have a decent grasp of "dog" just from actual and video examples - and that's the definition of "dog" we often carry into adulthood.

We can use ostensive definitions for reasoning and implications. For example, consider the famous syllogism: "Socrates is human" and "humans are mortal" imply "Socrates is mortal". "Socrates is human" means that we have an ostensive definition of what humans are, and Socrates fits it. Then "humans are mortal" means that we've observed that the set of "human" seems to be mainly a subset of the set of "mortals". So we can ostensively define humans as mortal (note that we are using definitions as properties: having the property of "being mortal" means that one is inside the ostensive definition of "mortals"). And so we can conclude that Socrates is likely mortal, without waiting till he's dead.

Distinctions: telling what from non-what

There's another concept that I haven't seen articulated, which is what I'll call the "distinction". This does not define anything, but is sufficient to distinguish an element of a set from non-members. To formalise "the distinction", let Ω be the universe of possible objects, and E ⊂ Ω the "environment" of objects we expect to encounter. An ostensive definition starts with a list S ⊂ E of examples, and generalises to a "natural" category S_E with S ⊂ S_E ⊂ E - we are aiming to "carve reality at the joints", and get a natural extension of the examples. So, for example, E might be the entities in our current world, S might be the examples of dogs we've seen, and S_E the set of all dogs. Then, for any set T ⊂ E, we can define the "distinction" d_{T,E}, which maps T to 1 ("True") and its complement E∖T to 0 ("False"). So d_{S_E,E} would be a distinction that identifies all the dogs in our current world.

Mis-definitions

A lot of confusion around definition seems to come from mistaking distinctions for definitions. To illustrate, consider the idea of defining maleness as "possessing the Y chromosome". As a distinction, it's serviceable: there's a strong correlation between having that chromosome and being ostensively male. But it is utterly useless as a definition of maleness. For instance, it would imply that nobody before the 20th century had any idea what maleness was. Oh, sure, they may have referred to something as "maleness" - something to do with genitalia, voting rights, or style of hats - but those are mere correlates of the true definition of maleness, which is the Y chromosome.
It would also imply that all "male" birds are actually female, and vice-versa. Scott had a description of maleness here: “Absolutely typical men have Y chromosomes, have male genitalia, appreciate manly things like sports and lumberjackery, are romantically attracted to women, personally identify as male, wear male clothing like blue jeans, sing baritone in the opera, et cetera.” Is this a definition? I’d say not; it’s not a definition, it’s a reminder of the properties of o...
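A minimal Python sketch of the objects defined in this episode (the environment E, the ostensive examples S, the extension S_E, and the distinction d_{T,E}) may help make the formalism concrete. The example entities and the hard-coded extension are my own illustration, not the post's:

```python
def distinction(T, E):
    """d_{T,E}: 1 for members of T, 0 for members of E outside T."""
    T, E = set(T), set(E)
    return {x: 1 if x in T else 0 for x in E}

# Toy environment E, a few pointed-out examples S of "dog", and a hand-picked
# "natural" extension S_E with S ⊂ S_E ⊂ E (the generalisation step itself is
# exactly the hard part the post is about, so here it is simply stipulated).
E   = {"fido", "rex", "lassie", "tabby", "tweety"}
S   = {"fido", "rex"}
S_E = {"fido", "rex", "lassie"}

d = distinction(S_E, E)   # d_{S_E,E}: identifies all the "dogs" in E
# d == {"fido": 1, "rex": 1, "lassie": 1, "tabby": 0, "tweety": 0}
```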
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Discussion with Nate Soares on a key alignment difficulty, published by HoldenKarnofsky on March 13, 2023 on The AI Alignment Forum.

In late 2022, Nate Soares gave some feedback on my Cold Takes series on AI risk (shared as drafts at that point), stating that I hadn't discussed what he sees as one of the key difficulties of AI alignment. I wanted to understand the difficulty he was pointing to, so the two of us had an extended Slack exchange, and I then wrote up a summary of the exchange that we iterated on until we were both reasonably happy with its characterization of the difficulty and our disagreement.[1]

My short summary is: Nate thinks there are deep reasons that training an AI to do needle-moving scientific research (including alignment) would be dangerous. The overwhelmingly likely result of such a training attempt (by default, i.e., in the absence of specific countermeasures that there are currently few ideas for) would be the AI taking on a dangerous degree of convergent instrumental subgoals while not internalizing important safety/corrigibility properties enough. I think this is possible, but much less likely than Nate thinks under at least some imaginable training processes.

I didn't end up agreeing that this difficulty is as important as Nate thinks it is, although I did update my views some (more on that below). My guess is that this is one of the two biggest disagreements I have with Nate's and Eliezer's views (the other one being the likelihood of a sharp left turn that leads to a massive capabilities gap between AI systems and their supervisors.[2])

Below is my summary of:
- Some key premises we agree on.
- What we disagree about, at a high level.
- A hypothetical training process we discussed in order to get more clear and mechanistic about Nate's views.
- Some brief discussion of possible cruxes; what kind of reasoning Nate is using to arrive at his relatively high (~85%) level of confidence on this point; and future observations that might update one of us toward the other's views.

MIRI might later put out more detailed notes on this exchange, drawing on all of our discussions over Slack and comment threads in Google docs. Nate has reviewed this post in full. I'm grateful for his help with it.

Some starting points of agreement

Nate on this section: "Seems broadly right to me!"

An AI is dangerous if:
- It's powerful (like, it has the ability to disempower humans if it's "aiming" at that).
- It aims (perhaps as a side effect of aiming at something else) at CIS (convergent instrumental subgoals) such as "Preserve option value," "Gain control of resources that can be used for lots of things," "Avoid being turned off," and such. (Note that this is a weaker condition than "maximizes utility according to some relatively simple utility function of states of the world.")
- It does not reliably avoid POUDA (pretty obviously unintended/dangerous actions) such as "Design and deploy a bioweapon." "Reliably" just means like "In situations it will actually be in" (which will likely be different from training, but I'm not trying to talk about "all possible situations").

Avoiding POUDA is kind of a low bar in some sense.
Avoiding POUDA doesn't necessarily require fully/perfectly internalizing some "corrigibility core" (such that the AI would always let us turn it off even in arbitrarily exotic situations that challenge the very meaning of "let us turn it off"), and it even more so doesn't require anything like CEV. It just means that stuff where Holden would be like "Whoa whoa, that is OBVIOUSLY unintended/dangerous/bad" is stuff that an AI would not do. That said, POUDA is not something that Holden is able to articulate cleanly and simply. There are lots of actions that might be POUDA in one situation and not in another (e.g., developing a chemical that's both poisonous and useful...
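For concreteness, the three conditions above can be restated as a toy predicate. This is only my own paraphrase of the episode's framing in code form; the field names and types are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class AIProfile:
    powerful: bool                # could disempower humans if it were "aiming" at that
    aims_at_cis: bool             # pursues convergent instrumental subgoals
                                  # (option value, resources, avoiding shutdown, ...)
    reliably_avoids_pouda: bool   # avoids pretty-obviously-unintended/dangerous actions
                                  # in the situations it will actually be in

def is_dangerous(ai: AIProfile) -> bool:
    # Dangerous = powerful AND aiming at CIS AND not reliably avoiding POUDA.
    return ai.powerful and ai.aims_at_cis and not ai.reliably_avoids_pouda
```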
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Plan for mediocre alignment of brain-like [model-based RL] AGI, published by Steve Byrnes on March 13, 2023 on The AI Alignment Forum.

(This post is a more simple, self-contained, and pedagogical version of Post #14 of Intro to Brain-Like AGI Safety.)

(Vaguely related to this Alex Turner post and this John Wentworth post.)

I would like to have a technical plan for which there is a strong robust reason to believe that we'll get an aligned AGI and a good future. This post is not such a plan. However, I also don't have a strong reason to believe that this plan wouldn't work. Really, I want to throw up my hands and say "I don't know whether this would lead to a good future or not". By "good future" here I don't mean optimally-good—whatever that means—but just "much better than the world today, and certainly much better than a universe full of paperclips". I currently have no plan, not even a vague plan, with any prayer of getting to an optimally-good future. That would be a much narrower target to hit.

Even so, that makes me more optimistic than at least some people. Or at least, more optimistic about this specific part of the story. In general I think many things can go wrong as we transition to the post-AGI world—see discussion by Dai & Soares—and overall I feel very doom-y, particularly for reasons here.

This plan is specific to the possible future scenario (a.k.a. "threat model" if you're a doomer like me) that future AI researchers will develop "brain-like AGI", i.e. learning algorithms that are similar to the brain's within-lifetime learning algorithms. (I am not talking about evolution-as-a-learning-algorithm.) These algorithms, I claim, are in the general category of model-based reinforcement learning. Model-based RL is a big and heterogeneous category, but I suspect that for any kind of model-based RL AGI, this plan would be at least somewhat applicable. For very different technological paths to AGI, this post is probably pretty irrelevant.

But anyway, if someone published an algorithm for x-risk-capable brain-like AGI tomorrow, and we urgently needed to do something, this blog post is more-or-less what I would propose to try. It's the least-bad plan that I currently know. So I figure it's worth writing up this plan in a more approachable and self-contained format.

1. Intuition: Making a human into a moon-lover ("selenophile")

Try to think of who is the coolest / highest-status-to-you / biggest-halo-effect person in your world. (Real or fictional.) Now imagine that this person says: "You know what's friggin awesome? The moon. I just love it. The moon is the best."

You stand there with your mouth agape, muttering to yourself in hushed tones: "Wow, huh, the moon, yeah, I never thought about it that way." (But 100× moreso. Maybe you're on some psychedelic at the time, or this is happening during your impressionable teenage years, or whatever.) You basically transform into a "moon fanboy" / "moon fangirl" / "moon nerd" / "selenophile".

How would that change your motivations and behaviors going forward? You're probably going to be much more enthusiastic about anything associated with the moon. You're probably going to spend a lot more time gazing at the moon when it's in the sky. If there are moon-themed trading cards, maybe you would collect them.
If NASA is taking volunteers to train as astronauts for a trip to the moon, maybe you’d enthusiastically sign up. If a supervillain is planning to blow up the moon, you’ll probably be extremely opposed to that, and motivated to stop them. Hopefully this is all intuitive so far. What’s happening mechanistically in your brain? As background, I think we should say that one part of your brain (the cortex, more-or-less) has “thoughts”, and another part of your brain (the basal ganglia, more-or-less) assigns a “value” (in RL ter...
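The episode text cuts off here, but the mechanism being described, a thought-generating part of the brain plus a critic that assigns value to thoughts, can be caricatured in a few lines of code. This is my own toy model for intuition, not anything from the post:

```python
from collections import defaultdict

# Toy "critic": a learned mapping from features of a thought to valence.
value = defaultdict(float)
value["admired-person-approves"] = 10.0   # already strongly positive

def thought_value(features):
    """Score a thought (a set of features) by summing the learned valences."""
    return sum(value[f] for f in features)

def update_on_experience(features, lr=0.1):
    """Crude credit assignment: every feature present in a high-value thought
    absorbs some of that thought's overall value."""
    total = thought_value(features)
    for f in features:
        value[f] += lr * (total - value[f])

# "The coolest person you know says the moon is awesome": the thought combines
# the neutral feature "moon" with an already-high-value feature, so "moon"
# acquires positive valence and later moon-related thoughts score higher.
update_on_experience({"moon", "admired-person-approves"})
print(thought_value({"moon"}))   # > 0 after the update
```

The actual proposal in the post is about steering which features a brain-like AGI's critic ends up valuing, but this is roughly the thought-generator-plus-valence-assigner shape that the moon-lover intuition relies on.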
Link to original article. Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Are there cognitive realms?, published by Tsvi Benson-Tilsen on March 12, 2023 on The AI Alignment Forum.

[Metadata: crossposted from. First completed November 16, 2022. This essay is more like research notes than exposition, so context may be missing, the use of terms may change across essays, and the text may be revised later; only the versions at tsvibt.blogspot.com are definitely up to date.]

Are there unbounded modes of thinking that are systemically, radically distinct from each other in relevant ways? Note: since I don't know whether "cognitive realms" exist, this essay isn't based on clear examples and is especially speculative.

Realms

Systemically, radically distinct unbounded modes of thinking

The question is, are there different kinds--writ large--of thinking? To the extent that there are, interpreting the mental content of another mind, especially one with different origins than one's own, may be more fraught than one would assume based on experience with minds that have similar origins to one's own mind.

Are there unbounded modes of thinking that are systemically, radically distinct from each other?

"Unbounded" means that there aren't bounds on how far the thinking can go, how much it can understand, what domains it can become effective in, what goals it can achieve if they are possible.

"Systemically" ("system" = "together-standing-things") means that the question is about all the elements that participate in the thinking, as they covary / coadapt / combine / interoperate / provide context for each other.

"Radical" (Wiktionary) does not mean "extreme". It comes from the same etymon as "radish" and "radix" and means "of the root" or "to the root"; compare "eradicate" = "out-root" = "pull out all the way to the root", and more distantly through PIE wréh₂ds the Germanic "wort" and "root". Here it means that the question isn't about some mental content in the foreground against a fixed background; the question asks about the background too, the whole system of thinking to its root, to its ongoing source and to what will shape it as it expands into new domains.

Terms

Such a mode of thinking could be called a "realm". A cognitive realm is an overarching, underlying, systemic, total, architectural thoughtform that's worth discussing separately from other thoughtforms. A realm is supposed to be objective, a single metaphorical place where multiple different minds or agents could find themselves.

Other words:
- systemic thoughtform
- system of thought, system of thinking
- cognitive style
- state of mind
- cluster / region in mindspace
- mode of being
- species of thinking

Realm vs. domain

A domain is a type of task, or a type of environment. A realm, on the other hand, is a systemic type of thinking; it's about the mind, not the task. For the idea of a domain see Yudkowsky's definition of intelligence as efficient cross-domain optimization power. Compare also domain-specific programming languages, and the domain of discourse of a logical system.

It might be more suitable for a mind to dwell in different realms depending on what domain it's operating in, and this may be a many-to-many mapping. Compare: The mapping from computational subsystems to cognitive talents is many-to-many, and the mapping from cognitive talents plus acquired expertise to domain competencies is also many-to-many, [...].
From "Levels of Organization in General Intelligence", Yudkowsky (2007). Domains are about the things being dealt with; it's a Cartesian concept (though it allows for abstraction and reflection, e.g. Pearlian causality is a domain and reprogramming oneself is a domain). Realms are about the thing doing the dealing-with. Realm vs. micro-realm A micro-realm is a realm except that it's not unbounded. It's similar to a cognitive faculty, and similar to a very abstract domain, but includes t...