PING

Author: APNIC

Description

PING is a podcast for people who want to look behind the scenes into the workings of the Internet. Each fortnight we chat with the people who have built the Internet and are working to improve its health.
The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
96 Episodes
In this episode of PING, Adli Wahid, APNIC's Security Specialist, discusses the APNIC honeypot network: an investment in over 400 collectors distributed throughout the Asia Pacific, gathering data on who is trying to break into systems online and use them for malware, distributed denial of service, and command-and-control systems in the bad-traffic economy. Adli explains how APNIC Members can use the DASH system to access the results of honeynet traffic capture coming from their network address ranges or originated from their ASes in BGP, and explores some work planned for the APNIC honeynet systems to extend their coverage. As well as publishing reports on the APNIC Blog and presenting at NOG meetings and conferences, Adli has coordinated information sharing from this collector network with a range of international partners such as the Shadowserver Foundation. He continues to offer security training and technical assistance to the APNIC community and works with the wider CERT, CSIRT and FIRST community.
In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, discusses the economic inevitability of centrality in the modern Internet. Despite our best intentions, and a lot of long-standing belief among IETF technologists, no amount of open standards and end-to-end protocol design prevents large players at every level of the network (from the physical infrastructure right up to the applications and the data centres which house them) from seeking to acquire smaller competitors and avoid sharing the space with anyone else. Some of this is a consequence of the drive for efficiency. Part has been fuelled by the effects of Moore’s Law, and the cost of capital investment against the time available to recover it. In an unexpected outcome, networking has become (to all intents and purposes) “free”, and instead of end-to-end delivery we now routinely expect to get data from highly localised, replicated sources. The main costs these days are land, electric power and air-conditioning. This causes a tendency to concentration, and networks and protocols play very little part in deciding who acquires these assets and operates them. The network still exists, of course, but increasingly data flows over private links and is not subject to open protocol design imperatives. A quote from Peter Thiel highlights how the modern venture capitalist in our space does not actively seek to operate in a competitive market: as Peter says, “competition is for losers”. It can be hard to avoid the “good” and “bad” labels when talking about this, but Geoff is clear he isn’t here to argue what is right or wrong, simply to observe the behaviour and its consequences. Geoff presented on centrality to the Decentralised Internet Research Group (DINRG) at the recent IETF meeting held in Madrid, and as he observes, “distributed” is not the same as “decentralised”: we’ve managed to achieve the first, but the second eludes us.
In this episode of PING, Robert Kisteleki from the RIPE NCC discusses the RIPE Atlas system: a network of over 13,000 measurement devices deployed worldwide, in homes, exchange points, stub and transit ASes, densely connected regions and sparse island states. Atlas began with a vision of the world at night, a powerful metaphor for where people are and where technology reaches. Could a measurement system achieve sufficient density to "light up the Internet" in a similar manner? Could network measurement be "democratized" to include Internet citizens at large? From its launch at the RIPE 61 meeting held in Rome, Italy, with 500 probes based on a small uClinux device designed as an Ethernet converter, through five generations of probe hardware, to today's software probe which can be installed on Linux and an "anchor" device which not only sends tests but can receive them, Atlas has become core technology for network monitoring, measurement and research. Rob discusses the history, design, methodology and future of the system. A wonderful contribution from the RIPE NCC to the community at large.
A Day in the Life of BGP

2025-07-23 01:01:09

In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, discusses "a day in the life of BGP": not an extraordinary day, not a special day, just the 8th of May. What happens inside the BGP system, from the point of view of AS4608, one ordinary BGP speaker on the edge of the network? What kinds of things are seen, and why? Geoff has been measuring BGP for almost its entire life as the Internet's routing protocol, but this time he looks at the dynamics at a more "micro" level than usual. In particular, some things about the rate of messages and changes point to the problems BGP faces. A small number of BGP speakers produce the vast majority of change, while the persisting view of the world that BGP speakers have to carry grows far more slowly. Both kinds of message dynamics have to be dealt with. Can we fix this? Is there even anything worth fixing here, or is BGP doing just fine?
In this episode of PING, Doug Madory from Kentik discusses his rundown of the state of play in secure BGP across 2024 and 2025. Kentik has its own internal measurements of BGP behaviour and flow data across the surface of the Internet, which, combined with the University of Oregon's curated Route Views archive, means Doug can analyse both the publicly visible state of BGP from archives and Kentik's own view of the dynamics of BGP change, alongside other systems like the worldwide RPKI model and the Internet Routing Registry systems. Doug has written about this before on the APNIC Blog, in May of 2024. RPKI demands two outcomes: firstly, that the resource holders who control a given range of Internet addresses sign an intent regarding who originates it (the ROA), and secondly, that BGP speakers worldwide validate the routing they see, known as Route Origin Validation (ROV). ROA signing is easy, and increases very simply if the delegate uses an RIR-hosted system to make the signed objects. ROV is not always simple and has to be deployed carefully, so it has a slower rate of deployment and more consequence in costs to the BGP speaker. Doug has been tracking both independently, as well as looking at known routing incidents in the default-free zone, and therefore the impact on RPKI-active networks, and everywhere else.
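To give a concrete feel for the ROV half of what Doug tracks, here is a minimal Python sketch of the RFC 6811 decision: given a set of already-fetched and validated ROAs, a route is valid if a covering ROA matches its origin AS within the maxLength, invalid if it is covered but mismatched, and not-found otherwise. The prefixes and ASNs are illustrative only, not real routing data.

```python
from ipaddress import ip_network

# Illustrative, already-validated ROAs: (prefix, maxLength, authorized origin ASN)
roas = [
    (ip_network("192.0.2.0/24"), 24, 64500),
    (ip_network("2001:db8::/32"), 48, 64501),
]

def rov_state(prefix: str, origin_as: int) -> str:
    """Classify a BGP route against the ROA set (RFC 6811 semantics)."""
    net = ip_network(prefix)
    covered = False
    for roa_net, max_len, roa_as in roas:
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True  # at least one ROA covers this prefix
            if net.prefixlen <= max_len and origin_as == roa_as:
                return "valid"
    return "invalid" if covered else "not-found"

print(rov_state("192.0.2.0/24", 64500))     # valid
print(rov_state("192.0.2.0/25", 64500))     # invalid: more specific than maxLength
print(rov_state("198.51.100.0/24", 64500))  # not-found: no covering ROA
```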
Downloading the root

2025-06-25 57:41

In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, discusses the root zone of the DNS and some emerging concerns about how much it costs to service query load at the root. In the absence of caching, all queries in the DNS (except those the resolver you ask is already authoritative for) have to pass through the root of the DNS to find the right nameserver to ask for the specific information. Thanks to caching, the system doesn't drown under the load of every worldwide query going through the root all the time. But even taking caching into account, there is an astronomical amount of query seen at the root, and it has two interesting qualities. Firstly, it's growing significantly faster than the normal rate of growth of the Internet: we're basically at small incremental growth overall in new users, but query load at the root increases significantly faster, even after some unexpected loads have been reduced. Secondly, almost all of the queries demand the answer "No, that doesn't exist", and the fact that most traffic to the root hunts the answer NO means that distributed caching of negative answers isn't addressing the fundamental burden here. Geoff thinks we may be ignoring some recent developments: the ZONEMD record, which is a DNSSEC-signed check on the entire zone contents, and emerging systems to download the root zone and localise all onward queries into a copy of the root held in the resolver. Basically, "can we do better?" Geoff thinks we very probably can.
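The "download the root and answer locally" idea Geoff refers to is essentially what RFC 8806 describes. As a rough illustration (not APNIC's tooling, and assuming dnspython 2.x), the sketch below transfers the root zone and looks up a delegation locally; the transfer source hostname is taken from the RFC 8806 list of servers said to permit root zone AXFR and should be treated as an assumption to verify before use.

```python
import dns.query
import dns.resolver
import dns.zone

# Assumption: this host is one of those documented (RFC 8806) as permitting
# AXFR of the root zone; check current guidance before depending on it.
ROOT_XFR_SOURCE = "lax.xfr.dns.icann.org"

# dnspython wants an address for the transfer, so resolve the source first.
xfr_addr = dns.resolver.resolve(ROOT_XFR_SOURCE, "A")[0].address
root = dns.zone.from_xfr(dns.query.xfr(xfr_addr, "."))

# With the whole zone held locally, a resolver can answer referrals -- and the
# flood of "that does not exist" answers -- without sending each miss upstream.
print(root.find_rdataset("com.", "NS"))
print(len(root.nodes), "names held in the local copy of the root zone")
```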
In this episode of PING, we're talking again to Leslie Daigle from the Global Cyber Alliance (GCA) about GCA's honeynet project. Leslie spoke with PING back in January 2024, and in this episode we revisit things. Honeynets (or honeyfarms) are deliberately weakly protected systems put online to see what kinds of bad traffic exist out in the global Internet, where they come from, and what kinds of attack they are mounting. In the intervening period GCA has continued to develop its honeyfarm, building out its own system images, and can now capture more kinds of bad traffic. It has also bedded in the MANRS community, which is now supported by GCA worldwide. In this episode, Leslie is actually asking more questions than providing answers. If we accept that there is now a persisting problem at scale, what kinds of approaches do we need to take to "get on top" of bad traffic? We used to think of this in terms of technical solutions, but Leslie increasingly feels we need to broaden the conversation into the public policy and governance communities, to understand what kinds of social cost we can bear and what socially driven objectives we want to pursue. The problem is, this is one of the tasks technologists are often least equipped to do: talk to people. GCA is showcasing the AIDE system, reachable at https://gcaaide.org/, as a way of opening up the conversation with national strategic policymakers and the wider community. It's a simple economy-and-region model summarising the state of honeynet-detected bad traffic levels worldwide, and helps to set an agenda with which individual ISPs and the routing-active community can engage, within their locus of control.
In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, revisits changes underway in how the Domain Name System (DNS) delegates authority over a given zone and how resolvers discover the new authoritative sources. We last explored this in March 2024.  In DNS, the word ‘domain’ refers to a scope of authority. Within a domain, everything is governed by its delegated authority. While that authority may only directly manage its immediate subdomains (children), its control implicitly extends to all subordinate levels (grandchildren and beyond). If a parent domain withdraws delegation from a child, everything beneath that child disappears. Think of it like a Venn diagram of nested circles — being a subdomain means being entirely within the parent’s scope. The issue lies in how this delegation is handled. It’s by way of nameserver (NS) records. These are both part of the child zone (where they are defined) and the parent zone (which must reference them). This becomes especially tricky with DNSSEC. The parent can’t authoritatively sign the child’s NS records because they are technically owned by the child. But if the child signs them, it breaks the trust chain from the parent. Another complication is the emergence of third parties to the delegate, who actually operate the machinery of the DNS. We need mechanisms to give them permission to make changes to operational aspects of delegation, but not to hold all the keys a delegate has regarding their domain name. A new activity has been spun up in the IETF to discuss how to alter this delegation problem by creating a new kind of DNS record, the DELEG record. This is proposed to follow the Service Binding model defined in RFC 9460. Exactly how this works and what it means for the DNS is still up in the air. DELEG could fundamentally change how authoritative answers are discovered, how DNS messages are transported, and how intermediaries interact with the DNS ecosystem. In the future, significant portions of DNS traffic might flow over new protocols, introducing novel behaviours in the relationships between resolvers and authoritative servers.
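The dual ownership of NS records is easy to see empirically. The hedged dnspython sketch below compares the delegation a parent (.org) server hands out in its referral with the NS set the child zone publishes itself; example.org is used purely as a stand-in zone, and a real check would also handle TCP fallback and DNSSEC material.

```python
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

ZONE = "example.org."  # stand-in zone; substitute your own delegation

# The child's view: the NS RRset served by the zone itself.
child_view = {rr.target.to_text() for rr in dns.resolver.resolve(ZONE, "NS")}

# The parent's view: query a .org server directly; the delegation comes back
# as a referral, with the parent-side NS records in the authority section.
parent_ns = dns.resolver.resolve("org.", "NS")[0].target
parent_addr = dns.resolver.resolve(parent_ns, "A")[0].address
resp = dns.query.udp(dns.message.make_query(ZONE, dns.rdatatype.NS), parent_addr, timeout=5)
parent_view = {
    rr.target.to_text()
    for rrset in (resp.authority or resp.answer)
    if rrset.rdtype == dns.rdatatype.NS
    for rr in rrset
}

print("child says: ", sorted(child_view))
print("parent says:", sorted(parent_view))
print("in sync" if child_view == parent_view else "delegation drift")
```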
In this episode of PING, Professor Cristel Pelsser, who holds the chair of critical embedded systems at UCLouvain, discusses her work measuring BGP and in particular the system described in the 2024 SIGCOMM best-paper-award-winning research, "The Next Generation of BGP Data Collection Platforms". Cristel and her collaborators Thomas Alfroy, Thomas Holterbach, Thomas Krenc and K. C. Claffy have built a system they call GILL, available on the web at https://bgproutes.io. The work also features a new service called MVP, which helps find the "most valuable vantage point" in the BGP collection system for your particular needs. GILL has been designed for scale and will be capable of encompassing thousands of peerings. It also has an innovative approach to holding BGP data, focussed on removing demonstrably redundant information, and therefore achieves significantly higher compression of the data stream compared to, for example, holding MRT files. The MVP system uses machine learning methods to aid in selecting the most advantageous data collection point for a researcher's specific needs; applying ML here allows a significant amount of data to be managed, and changes to be reflected in the selection of vantage points. Their system has already been able to support DFOH, an approach to finding forged-origin attacks from peering relationships seen online in BGP, as opposed to the peering expected both from location and from declarations of intent inside systems like PeeringDB.
In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, discusses the history and emerging future of how Internet protocols get more than the apparent link bandwidth by using multiple links and multiple paths. Initially, the model was quite simple, capable of handling up to four links of equal cost and delay reasonably well, typically to connect two points together. At the time, the Internet was built on telecommunications services originally designed for voice networks, with cabling laid between exchanges, from exchanges to customers, or across continents. This straightforward technique allowed the Internet to expand along available cable or fibre paths between two points. However, as the system became more complex, new path options emerged and bandwidth demands grew beyond the capacity of individual or even equal-cost links, and increasingly sophisticated methods for managing these connections had to be developed. An interesting development at the end of this process is the impact of a fully encrypted transport layer on the intervening infrastructure’s ability to manage traffic distribution across multiple links. With encryption obscuring the contents of the dataflow, traditional methods for intelligently splitting traffic become less effective. Randomly distributing data can often worsen performance, as modern techniques rely on protocols like TCP to sustain high-speed flows by avoiding data misordering and packet loss. This episode of PING explores how Internet protocols boost bandwidth by using multiple links and paths, and how secure transport layers affect this process.
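The point about encrypted transports and traffic splitting rests on how flow-hash load sharing works: every packet carrying the same flow identifiers is steered to the same member link, so a single TCP or QUIC flow is never reordered across the bundle. The toy Python sketch below illustrates the idea; the link names and flow fields are invented for the example.

```python
import hashlib
from typing import NamedTuple

class FiveTuple(NamedTuple):
    src: str
    dst: str
    proto: str
    sport: int
    dport: int

LINKS = ["link-0", "link-1", "link-2", "link-3"]  # equal-cost member links

def pick_link(ft: FiveTuple) -> str:
    """Hash the flow identifiers so all packets of one flow share a link.

    Splitting a single TCP/QUIC flow across links would reorder its packets and
    hurt throughput; hashing keeps per-flow ordering while spreading different
    flows across the bundle. With fully encrypted transports, only these outer
    header fields remain visible to the device doing the splitting.
    """
    key = f"{ft.src}|{ft.dst}|{ft.proto}|{ft.sport}|{ft.dport}".encode()
    digest = hashlib.sha256(key).digest()
    return LINKS[int.from_bytes(digest[:4], "big") % len(LINKS)]

flow = FiveTuple("198.51.100.7", "203.0.113.9", "udp", 51515, 443)
print(pick_link(flow))  # the same flow maps to the same link, every time
```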
Last month, during APRICOT 2025 / APNIC 59, the Internet Society hosted its first Pulse Internet Measurement Forum (PIMF). PIMF brings together people interested in Internet measurement from a wide range of perspectives, from technical details to policy, governance, and social issues. The goal is to create a space for open discussion, uniting both technologists and policy experts. In this second special episode of PING, we continue our break from the usual one-on-one podcast format and present a recap of why the PIMF forum was held, plus the last three short interviews from the workshop. First we hear a repeat of Amreesh Phokeer's presentation. Amreesh is from the Internet Society and discusses his role in managing the Pulse activity within ISOC. Alongside Robbie Mitchell, Amreesh helped organize the forum, aiming to foster collaboration between measurement experts and policy professionals. Next we hear from Beau Gieskens, a Senior Software Engineer with APNIC Information Products. Beau has been working on the DASH system and discusses his PIMF presentation on a redesign to an event-sourcing model, which reduced database query load and improved the speed and scaling of the service. We then have Doug Madory from Kentik, who presented to PIMF on a quirk in how Internet Routing Registries (IRRs) are being used, which can cause massive costs in BGP filter configuration and is related to some recent route leaks seen at large in the default-free zone of BGP. Finally, we hear from Lia Hestina of the RIPE NCC Atlas project. Lia is the community development officer, focussed on the Asia Pacific and Africa for the Atlas project. She discusses the Atlas system and how it underpins measurements worldwide, including ones discussed at the PIMF meeting. For more insights from PIMF, be sure to check out the Pulse Forum recording on the Internet Society YouTube feed.
DNS Computer says "NO"

2025-04-02 44:00

In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, discusses the surprisingly vexed question of how to say ‘no’ in the DNS. The conversation follows a presentation at the recent DNS OARC meeting by Shumon Huque, who will be on PING in a future episode talking about another aspect of the DNS protocol. You would hope saying ‘no’ would be a simple, straightforward answer to a question, but as usual with the DNS, there are more complexities under the surface. The DNS must indicate whether the labels in the requested name do not exist, whether the specific record type is missing, or both. Sometimes it needs to state both pieces of information, while other times it only needs to state one. The problem is made worse by the constraints of signing answers with DNSSEC. There needs to be a way to say ‘no’ authoritatively while minimizing the risk of leaking any other information. NSEC3 records are designed to limit this exposure by making it harder to enumerate an entire zone. Instead of explicitly listing the ‘before’ and ‘after’ labels in a signed response denying a label’s existence, NSEC3 uses hashed values to obscure them. In contrast, the simpler NSEC model reveals adjacent labels, allowing an attacker to systematically map out all existing names, a serious risk for domain registries that depend on name confidentiality. This is documented in RFC 7129. Saying ‘no’ with authority also raises the question of where signing occurs: at the zone’s centre (by the zone holder) or at the edge (by the zone server). These approaches lead to different solutions, each with its own costs and consequences. In this episode of PING, Geoff explores the differences between a non-standard, vendor-explored solution and the emergence of a draft standard for how to say ‘no’ properly.
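To make the NSEC3 point concrete, here is a small Python sketch of the RFC 5155 owner-name hash (SHA-1 over the canonical wire-format name, salted and iterated): a signed denial covers an interval of these hashes rather than the plaintext neighbours, which is what makes zone enumeration harder than with plain NSEC. The salt and iteration count below are illustrative, not a recommendation.

```python
import base64
import hashlib

def name_to_wire(name: str) -> bytes:
    """Canonical wire form: lowercased labels, each length-prefixed, ending in a zero byte."""
    name = name.rstrip(".").lower()
    wire = b""
    if name:
        for label in name.split("."):
            wire += bytes([len(label)]) + label.encode("ascii")
    return wire + b"\x00"

def nsec3_hash(name: str, salt_hex: str = "", iterations: int = 0) -> str:
    """RFC 5155 hash: H(...H(H(owner|salt)|salt)...), Base32hex-encoded (needs Python 3.10+)."""
    salt = bytes.fromhex(salt_hex) if salt_hex else b""
    digest = hashlib.sha1(name_to_wire(name) + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha1(digest + salt).digest()
    return base64.b32hexencode(digest).decode()

# The signed "no" covers a hash interval, so a response denying one name does
# not hand an attacker the plaintext neighbours the way plain NSEC does.
print(nsec3_hash("example.com.", salt_hex="aabbccdd", iterations=10))
```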
At the APRICOT 2025 / APNIC 59 meeting held in Petaling Jaya, Malaysia, last month, the Internet Society held its first PIMF meeting. PIMF, the Pulse Internet Measurement Forum, is a gathering of people interested in Internet measurement in the widest possible sense, from technical information all the way to policy, governance and social questions. ISOC is interested in creating a space for this discussion to take place amongst the community, bringing both technologists and policy specialists into the same room. This time on PING, instead of the usual one-on-one podcast format, we've got five interviews from the meeting, and after the next episode from Geoff Huston at APNIC Labs we'll play a second part with three more of the presenters from this session. First up we have Amreesh Phokeer from the Internet Society, who manages the Pulse activity in ISOC and, along with Robbie Mitchell, set up the meeting. Then we hear from Christoph Visser from IIJ Labs in Tokyo, who presented on his measurements of the Steam game distribution platform used by Valve Software to distribute games. It's a complex system of application-specific source selection, using multiple Content Distribution Networks (CDNs) to scale across the world, and it allows Christoph to see into link quality from a public API: no extra measurements required, for an insight into the gamer community and their experience of the Internet. The third interview is with Anand Raje from AIORI-IMN, India’s Indigenous Internet Measurement System. Anand leads a team which has built out a national measurement system using IoT "orchestration" methods to manage probes and anchors, in a virtual environment which permits them to run multiple independent measurement systems hosted inside their platform. After this there's an interview with Andre Robachevsky from the Global Cyber Alliance (GCA). Andre established the MANRS system and its platform, and nurtured the organisation into being inside ISOC. MANRS has now moved into the care of GCA and Andre moved with it, and he discusses how this complements the existing GCA activities. Finally we have a conversation with Champika Wijayatunga from ICANN on the KINDNS project. This is a programme designed to bring MANRS-like industry best practice to the DNS community at large, including authoritative DNS delegates and the operators of intermediate resolvers and the stub resolvers that support clients. Champika is interested in reaching into the community to get KINDNS more widely understood and to encourage its adoption; over 2,000 entities have already completed the assessment process. Next time we'll hear from three more participants in the PIMF session: Doug Madory from Kentik, Beau Gieskens from APNIC Information Products, and Lia Hestina from the RIPE NCC.
Night of the BGP Zombies

2025-03-05 58:52

In this episode of PING, APNIC’s Chief Scientist, Geoff Huston, explores BGP "zombies": routes which should have been removed but are still there. They're the living dead of routes. How does this happen? Back in the early 2000s, Gert Döring in the RIPE NCC region was collating a "state of BGP for IPv6" report and knew each of the 300 or so IPv6 announcements directly. He understood what should be seen and what was not being routed, and he discovered, in this early stage of IPv6, that some routes he knew had been withdrawn in BGP still existed in the repositories of known routing state. This is some of the earliest evidence of a failure mode in BGP where the withdrawal of information fails to propagate, and some BGP speakers never learn a route has been taken down. They hang on to it. BGP only sends differences to the current routing state as and when they emerge: if you start afresh you get a lot of differences, because everything has to be sent from a ground state of nothing, but after that you're only told when new things arrive and old things go away. So a speaker can go a long time without saying anything about a particular route: if it's stable and up, there's nothing to say, and once a withdrawal has been passed on, you no longer hold the route to tell anyone it's gone. If, somewhere in the middle of this conversation, a BGP speaker misses the news that something has gone, then as long as it never has to announce that route again, nobody will know it missed the withdrawal. In more recent times there has been a concern that this may be caused by a problem in how BGP messages sit inside TCP, which has even led to an RFC in the IETF process defining a new way to close sessions out. Geoff isn't convinced this diagnosis is correct, or that the proposed remediation is the right one. Prompted by a recent NANOG presentation, Geoff has been thinking about the problem and what to do, and he has a simpler approach which may work better.
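The failure mode is easier to see with a toy model of BGP's incremental update stream: a per-peer table simply applies announcements and withdrawals as they arrive, so if one withdrawal is lost, the stale path lingers and nothing later in the stream will correct it. A minimal Python sketch, with invented prefixes and AS numbers:

```python
# Toy model of an Adj-RIB-In: BGP only sends differences, so the table's state
# is whatever survives the stream of announce/withdraw messages.
def apply(msg, rib, drop_withdraws=False):
    kind, prefix, path = msg
    if kind == "announce":
        rib[prefix] = path
    elif kind == "withdraw" and not drop_withdraws:
        rib.pop(prefix, None)

updates = [
    ("announce", "192.0.2.0/24", [64500, 64496]),
    ("announce", "198.51.100.0/24", [64500, 64497]),
    ("withdraw", "198.51.100.0/24", None),   # the origin goes away
]

healthy, zombie = {}, {}
for msg in updates:
    apply(msg, healthy)
    # Simulate the lost withdrawal for the second speaker.
    apply(msg, zombie, drop_withdraws=(msg[0] == "withdraw"))

print("healthy RIB:", healthy)  # only 192.0.2.0/24 remains
print("zombie RIB: ", zombie)   # 198.51.100.0/24 lingers -- and since BGP never
                                # re-states what is already gone, nothing flushes it
```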
In this episode, Job Snijders discusses rpkiviews, his long-term project to collect "views" of RPKI state every day and maintain an archive of BGP route validation states. The project is named to echo Route Views, the long-standing archive of BGP state maintained by the University of Oregon, which has been discussed on PING before. Job is based in the Netherlands and has worked in BGP routing for large international ISPs and content distribution networks, as well as serving as a board member of the RIPE NCC. He is known for his work producing the open-source rpki-client RPKI validator, implemented in C and distributed widely through the OpenBSD project. RPKI is the Resource PKI, resource meaning the Internet number resources: the IPv4, IPv6 and Autonomous System (AS) numbers which are used to implement routing in the global Internet. The PKI provides cryptographic proofs of delegation of these resources, and allows the delegates to sign statements of their intentions in originating specific prefixes in BGP, and about the relationships between the ASes which speak BGP to each other. Why rpkiviews? Job explains that there's a necessary conversation between the people involved in the operational deployment of secure BGP and the standards development and research community: how many of the world's BGP routes are being protected? How many places are producing Route Origin Authorizations (ROAs), the primary cryptographic object used to perform Route Origin Validation (ROV), and how many objects are made? What's the error rate in production, and the rate of growth? A myriad of introspective "meta" questions need to be asked when deploying this kind of system at scale, and one of the best tools to use is an archive of state, updated frequently and, as with Route Views, collected from a diverse range of places worldwide, to understand the dynamics of the system. Job uses the archive to produce his annual "RPKI Year in Review" report, which was published this year on the APNIC Blog (it's normally posted to operations, research and standards development mailing lists and presented at conferences and meetings), and its products are being used by the BGPalerter service developed by Massimo Candela.
In his first episode of PING for 2025, APNIC’s Chief Scientist, Geoff Huston, returns to the Domain Name System (DNS) and explores the many faces of the nameservers behind domains. Up at the root (the very top of the namespace, where all top-level domains like .gov or .au or .com are defined to exist) there is a well-established principle of 13 root nameservers. Does this mean only 13 hosts worldwide service this space? Nothing could be farther from the truth! Literally thousands of hosts act as one of those 13 root server labels, in a highly distributed worldwide mesh known as "anycast", which works through BGP routing. The thing is, exactly how the number of nameservers for any given domain is chosen, and how resolvers (the querying side of the DNS, the things which ask questions of authoritative nameservers) decide which one of those servers to use, isn't as well defined as you might think. The packet sizes, the order of data in the packet, and how it's encoded are all very well defined, but "which one should I use from now on, to answer this kind of question" is really not well defined at all. Geoff has been using the APNIC Labs measurement system to test behaviour here, and looking at basic numbers for the delegated domains at the root. The number of servers he sees, their diversity, and the nature of their deployment technology in routing are quite variable. But even more interestingly, the diversity of "which one gets used" on the resolver side suggests some very old, out-of-date and over-simplistic methods are still being used almost everywhere to decide what to do.
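One reason "which one gets used" varies so much is that resolvers are free to choose their own selection strategy. A commonly described approach (used in spirit by several resolver implementations, though not standardized anywhere) is to keep a smoothed RTT estimate per server and usually pick the fastest while occasionally probing the others. A toy Python sketch of that idea, with invented server names:

```python
import random

class ServerSelector:
    """Toy resolver-side selection: prefer the lowest smoothed RTT, but keep
    occasionally probing the others so a recovered server can win back traffic.
    An illustration of one common strategy, not a standardized algorithm."""

    def __init__(self, servers, alpha=0.3, explore=0.05):
        self.srtt = {s: 0.0 for s in servers}  # optimistic start: try everyone once
        self.alpha = alpha                     # smoothing factor for new RTT samples
        self.explore = explore                 # fraction of queries used to re-probe

    def pick(self):
        if random.random() < self.explore:
            return random.choice(list(self.srtt))
        return min(self.srtt, key=self.srtt.get)

    def record(self, server, rtt_ms):
        self.srtt[server] = (1 - self.alpha) * self.srtt[server] + self.alpha * rtt_ms

sel = ServerSelector(["ns1.example.net", "ns2.example.net", "ns3.example.net"])
sel.record("ns1.example.net", 12.0)
sel.record("ns2.example.net", 80.0)
sel.record("ns3.example.net", 45.0)
print(sel.pick())  # usually ns1, but sometimes an exploratory probe
```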
RISKY BIZ-ness

2025-01-22 44:04

Welcome back to PING, at the start of 2025. In this episode, Gautam Akiwate (now with Apple, but at the time of recording with Stanford University) talks about the Applied Networking Research Prize winning paper "Risky BIZness: Risks Derived from Registrar Name Management", co-authored in 2021 with Stefan Savage, Geoffrey Voelker and Kimberly Claffy. The paper explores a situation which emerged inside the supply chain behind DNS name delegation, in the use of an IETF protocol called the Extensible Provisioning Protocol (EPP). EPP is an XML-based protocol and is how registry-registrar communications take place, on behalf of a given domain name holder (the delegate), to record which DNS nameservers have the authority to publish the delegated zone. The problem doesn't lie in the DNS itself, but in the operational practices which emerged in some registrars to remove dangling dependencies in their systems when domain names were de-registered. In effect they used an EPP feature to rename the dependency, so they could move on with selling the domain name to somebody else. The problem is that this feature created valid names, which could themselves then be purchased. For some number of DNS consumers, those new valid nameservers would then be permitted to serve the domain, enabling attacks on the integrity of the DNS and the web. Gautam and his co-authors explored a very interesting quirk of the back-end systems, and in the process helped improve the security of the DNS and identified weaknesses in a long-standing "daily dump" process used to provide audit and historical data.
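A rough Python sketch of the kind of hygiene check the paper motivates: for each nameserver a zone depends on, test whether the domain that nameserver lives under still exists in the DNS, since an NXDOMAIN there suggests the name could be registered by someone else. The two-label guess at the "registrable domain" is a deliberate simplification (real tooling should use the Public Suffix List), and the zone name is a placeholder, not part of the paper's method.

```python
import dns.resolver

def registrable(host: str) -> str:
    """Crude 'registrable domain' guess: last two labels. Real code should use
    the Public Suffix List; this is only an illustration."""
    labels = host.rstrip(".").split(".")
    return ".".join(labels[-2:]) + "."

def dangling_ns(zone: str):
    """Report NS hostnames whose enclosing domain no longer exists, and so
    might be re-registered by a third party (the Risky BIZness scenario)."""
    risky = []
    for rr in dns.resolver.resolve(zone, "NS"):
        parent = registrable(rr.target.to_text())
        try:
            dns.resolver.resolve(parent, "SOA")
        except dns.resolver.NXDOMAIN:
            risky.append((rr.target.to_text(), parent))
        except dns.resolver.NoAnswer:
            pass  # domain exists but returned no SOA here; not obviously dangling
    return risky

print(dangling_ns("example.com."))  # placeholder zone; expect an empty list
```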
Post-Quantum Cryptography

2024-12-11 01:05:44

In the last episode of PING for 2024, APNIC’s Chief Scientist Geoff Huston discusses the shift from existing public-private key cryptography using the RSA and ECC algorithms to the world of ‘post-quantum cryptography’. These new algorithms are designed to withstand potential attacks from large-scale quantum computers capable of implementing Shor’s algorithm, a theoretical approach for using quantum computing to break the cryptographic keys of RSA and ECC. Standards agencies like NIST are pushing to develop algorithms that are both efficient on modern hardware and resistant to the potential threats posed by Shor’s algorithm in future quantum computers. This urgency stems from the need to ensure ‘perfect forward secrecy’ for sensitive data, meaning that information encrypted today remains secure and undecipherable even decades into the future. To date, maintaining security has been achieved by increasing the recommended key length as computing power improved under Moore’s Law, with faster processors and greater parallelism. However, quantum computing operates differently and will be capable of breaking the encryption of current public-private key methods, regardless of key length. Public-private keys are not used to encrypt entire messages or datasets. Instead, they encrypt a temporary ‘ephemeral’ key, which is then used by a symmetric algorithm to secure the data. Symmetric key algorithms (where the same key is used for encryption and decryption) are not vulnerable to Shor’s algorithm. However, if the symmetric key is exchanged using RSA or ECC, as is common in protocols like TLS and QUIC when parties lack a pre-established way to share keys, quantum computing could render the protection ineffective: a quantum computer could recover the symmetric key from a recorded exchange and decrypt the entire communication. Geoff raises concerns that while post-quantum cryptography is essential for managing risks in many online activities, especially for protecting highly sensitive or secret data, it might be misapplied to DNSSEC. In DNSSEC, public-private keys are not used to protect secrets but to ensure the accuracy of DNS data in real time. If there’s no need to worry about someone decoding these keys 20 years from now, why invest significant effort in adapting DNSSEC for a post-quantum world? Instead, he questions whether simply using longer RSA or ECC keys and rotating key pairs more frequently might be a more practical approach. PING will return in early 2025. This is the last episode of PING for 2024; we hope you’ve enjoyed listening. The first episode of our new series is expected in late January 2025. In the meantime, catch up on all past episodes.
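The hybrid pattern Geoff describes (asymmetric keys protecting only a short ephemeral key, with a symmetric cipher protecting the data) can be sketched in a few lines with the pyca/cryptography package. In this illustration RSA-OAEP wraps an AES-GCM session key; it is that wrapped key, recorded today and broken by a future quantum computer, which would expose the whole session. This is a sketch of the pattern, not a description of how any particular protocol does it.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Long-lived public/private key pair: the part Shor's algorithm threatens.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Ephemeral symmetric key: this is what actually encrypts the traffic, and
# symmetric ciphers like AES-GCM are not broken by Shor's algorithm.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual payload", None)

# The asymmetric algorithm only wraps the small session key. Record this
# exchange today, break the RSA later, and the whole session becomes readable:
# the "harvest now, decrypt later" worry driving the post-quantum work.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# The receiver unwraps the session key and decrypts the payload.
recovered = AESGCM(private_key.decrypt(wrapped_key, oaep)).decrypt(nonce, ciphertext, None)
assert recovered == b"the actual payload"
```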
This time on PING, Peter Thomassen from SSE and deSEC.io discusses his analysis of the failure modes of CDS and CDNSKEY records between parent and child in the DNS. These records are used to provide in-band signalling of the DS record, which is fundamental to maintaining a secure path from the trust anchor to the delegation through all the intermediate parent and grandparent domains. Many people use out-of-band methods to update this DS information, but the CDS and CDNSKEY records are designed to signal this critical information inside the DNS, avoiding many of the pitfalls of passing through a registry-registrar web service. The problem is, as Peter has discovered, that the information across the various nameservers (denoted by the NS records in the DNS) of the child domain can get out of alignment, and the tests a parent zone needs to perform when checking CDS and CDNSKEY information aren't sufficiently specified to close off this risk. Peter performed a "meta analysis" inside a far larger cohort of DNS data captured by Florian Steurer and Tobias Fiebig at the Max Planck Institute, and discovered a low but persisting error rate: a drift in the critical keying information between a zone's nameservers and the parent. Some of these cases related to transitional states in the DNS (such as when you move registry or DNS provider), but by no means all, and this has motivated Peter and his co-authors to look at improved recommendations for managing CDS/CDNSKEY data, to minimise the risk of inconsistency and the consequent loss of a secure entry path to a domain name.
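A hedged dnspython sketch of the consistency check at the heart of this episode: fetch the CDS RRset directly from each of a zone's advertised nameservers and compare the views, since any disagreement means the parent could act on stale key material. The zone name is a stand-in, and a production check would also cover CDNSKEY, TCP fallback and DNSSEC validation of the answers.

```python
import dns.message
import dns.query
import dns.rdatatype
import dns.resolver

def cds_views(zone: str) -> dict:
    """Ask every advertised nameserver for the zone's CDS RRset, directly."""
    views = {}
    for ns in dns.resolver.resolve(zone, "NS"):
        addr = dns.resolver.resolve(ns.target, "A")[0].address
        resp = dns.query.udp(dns.message.make_query(zone, dns.rdatatype.CDS),
                             addr, timeout=5)
        views[ns.target.to_text()] = {
            rr.to_text() for rrset in resp.answer for rr in rrset
        }
    return views

views = cds_views("example.org.")  # stand-in zone
if len({frozenset(v) for v in views.values()}) > 1:
    print("CDS drift between nameservers:", views)
else:
    print("all nameservers agree:", views)
```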
The IPv6 Transition

2024-11-13 59:47

In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston discusses the slowdown in worldwide IPv6 uptake. Within the Asia Pacific footprint we have some truly remarkable national statistics, such as India, which is now over 80% IPv6-capable by APNIC Labs' measurements, and Vietnam, not far behind at 70%. The problem is that worldwide, adjusted for population and considering levels of Internet penetration in the developed economies, the pace of uptake overall has not improved and has been essentially linear since 2016. In some economies, like the US, a natural peak of around 50% capability was reached in 2017 and uptake has been essentially flat since then: there is no sign of closure to a global deployment in the US and many other economies. Geoff takes a high-level view of the logistic supply curve, with its early adopters, early and late majority, and laggards, and sees no clear signal of a visible endpoint where the transition to IPv6 will be "done". Instead we're facing continual dual-stack operation of both IPv4 (increasingly behind Carrier-Grade NATs (CGNs) deployed inside the ISP) and IPv6. There are success stories in mobile (as seen in India) and in broadband with central management of the customer router. But it seems that with the shift in the criticality of routing and numbering to a more name-based steering mechanism, and the continued rise of content distribution networks, the pace of IPv6 uptake worldwide has not followed the pattern we had planned for.
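The logistic "S-curve" Geoff refers to can be written down in a couple of lines. The Python sketch below only shows the shape of such a curve; the ceiling, midpoint and rate are invented for illustration, not fitted to APNIC Labs data. A ceiling well below 100% is exactly the "no visible endpoint" problem.

```python
import math

def logistic(t, ceiling=1.0, midpoint=2020.0, rate=0.4):
    """Classic logistic adoption curve: slow start, rapid middle, saturating tail."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Illustrative only: a ceiling of 50% capability, as observed in some economies,
# means the curve flattens long before "everyone has IPv6".
for year in range(2012, 2031, 3):
    print(year, f"{logistic(year, ceiling=0.5):.0%}")
```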
Comments (1)

Alexey Vorobyev

Guys, can you please fix Geoff's sound? In every episode (which is otherwise quite interesting) he has an "empty room" echo and a lower sound level than George's. Thanks for the things you're doing!

Feb 10th