PING
Author: APNIC
© Copyright 2025 PING
Description
PING is a podcast for people who want to look behind the scenes into the workings of the Internet. Each fortnight we chat with people who have built the Internet and are working to improve its health.
The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
80 Episodes
Welcome back to PING at the start of 2025. In this episode, Gautam Akiwate (now with Apple, but at the time of recording with Stanford University) talks about the Applied Networking Research Prize-winning 2021 paper, co-authored with Stefan Savage, Geoffrey Voelker, and Kimberly Claffy, titled "Risky BIZness: Risks Derived from Registrar Name Management".
The paper explores a situation which emerged in the supply chain behind DNS name delegation, in the use of an IETF protocol called the Extensible Provisioning Protocol, or EPP. EPP is an XML-based protocol that carries registry-registrar communications on behalf of a given domain name holder (the delegate), recording which DNS nameservers have the authority to publish the delegated zone. The problem doesn't lie in the DNS itself, but in the operational practices which emerged at some registrars to remove dangling dependencies in their systems when domain names were de-registered. In effect, they used an EPP feature to rename the dependency, so they could move on with selling the domain name to somebody else.
The problem is that this feature created valid names, which could themselves then be purchased. For some number of DNS consumers, those new nameservers would then be permitted to serve the domain, enabling attacks on the integrity of the DNS and the web.
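Here's a minimal sketch (not the paper's tooling) of the hazard class described above: for each nameserver of a zone, check whether the nameserver's own base domain still exists. An NXDOMAIN answer means that domain could be registered by anyone, including an attacker. It assumes the dnspython package, and the zone name is a placeholder.

```python
# Sketch: find nameserver names whose parent domain no longer exists,
# and could therefore be registered by an attacker. Requires dnspython.
import dns.resolver

def dangling_nameservers(zone: str) -> list[str]:
    """Return NS targets of `zone` whose base domain returns NXDOMAIN."""
    risky = []
    for ns in dns.resolver.resolve(zone, "NS"):
        ns_name = ns.target.to_text(omit_final_dot=True)
        # Crude registrable-domain guess: last two labels. Real tooling
        # would consult the Public Suffix List instead.
        base = ".".join(ns_name.split(".")[-2:])
        try:
            dns.resolver.resolve(base, "SOA")
        except dns.resolver.NXDOMAIN:
            risky.append(ns_name)  # base domain is gone: hijackable
        except dns.resolver.NoAnswer:
            pass  # domain exists but holds no SOA at this node: fine
    return risky

print(dangling_nameservers("example.com"))  # placeholder zone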
Gautam and his co-authors explored a very interesting quirk of these back-end systems and, in the process, helped improve the security of the DNS and identified weaknesses in the long-standing "daily dump" process that provides audit and historical data.
In the last episode of PING for 2024, APNIC’s Chief Scientist Geoff Huston discusses the shift from existing public-private key cryptography using the RSA and ECC algorithms to the world of ‘Post-Quantum Cryptography’. These new algorithms are designed to withstand attacks from large-scale quantum computers capable of implementing Shor’s algorithm, a theoretical approach for using quantum computing to break the cryptographic keys of RSA and ECC.
Standards agencies like NIST are pushing to develop algorithms that are both efficient on modern hardware and resistant to the threats posed by Shor’s algorithm on future quantum computers. The urgency stems from the ‘harvest now, decrypt later’ risk to sensitive data: information encrypted today must remain secure and undecipherable even decades into the future.
To date, security has been maintained by increasing the recommended key length as computing power improved under Moore’s Law, with faster processors and greater parallelism. Quantum computing operates differently, however, and would be capable of breaking the encryption of current public-private key methods regardless of key length.
Public-private keys are not used to encrypt entire messages or datasets. Instead, they encrypt a temporary ‘ephemeral’ key, which is then used by a symmetric algorithm to secure the data. Symmetric key algorithms (where the same key is used for encryption and decryption) are not vulnerable to Shor’s algorithm. However, if the symmetric key is exchanged using RSA or ECC, as is common in protocols like TLS and QUIC when parties lack a pre-established way to share keys, quantum computing could render the protection ineffective: an attacker with a quantum computer could recover the symmetric key from a recorded exchange, compromising the entire communication.
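A minimal sketch of that hybrid model, using the Python `cryptography` package: RSA protects only a small ephemeral key, and AES-GCM (symmetric) protects the actual data. Shor's algorithm threatens the RSA step, not the AES step.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 1. Bulk data is encrypted with an ephemeral symmetric key.
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the actual message", None)

# 2. Only the 32-byte session key is encrypted with RSA. An attacker who
#    records this exchange and later runs Shor's algorithm recovers the
#    session key, and with it the whole conversation.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# The recipient reverses both steps.
plaintext = AESGCM(recipient_key.decrypt(wrapped_key, oaep)).decrypt(
    nonce, ciphertext, None)
assert plaintext == b"the actual message"
```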
Geoff raises concerns that while post-quantum cryptography is essential for managing risks in many online activities, especially for protecting highly sensitive or secret data, it might be misapplied to DNSSEC. In DNSSEC, public-private keys are not used to protect secrets but to ensure the accuracy of DNS data in real time.
If there’s no need to worry about someone decoding these keys 20 years from now, why invest significant effort in adapting DNSSEC for a post-quantum world? Instead, he questions whether simply using longer RSA or ECC keys and rotating key pairs more frequently might be a more practical approach.
PING will return in early 2025
This is the last episode of PING for 2024; we hope you’ve enjoyed listening. The first episode of our new series is expected in late January 2025. In the meantime, catch up on all past episodes.
This time on PING, Peter Thomassen from SSE and deSEC discusses his analysis of the failure modes of CDS and CDNSKEY records between parent and child zones in the DNS. These records provide in-band signalling of the DS record, which is fundamental to maintaining a secure path from the trust anchor to the delegation through all the intermediate parent and grandparent domains. Many people use out-of-band methods to update DS information, but the CDS and CDNSKEY records are designed to signal this critical information inside the DNS itself, avoiding many of the pitfalls of passing through a registry-registrar web service.
The problem, as Peter has discovered, is that the information across the various nameservers of the child domain (denoted by its NS records) can fall out of alignment, and the checks a parent zone needs to perform on CDS and CDNSKEY information aren't specified tightly enough to close off this risk.
Peter performed a meta-analysis of a far larger cohort of DNS data captured by Florian Steurer and Tobias Fiebig at the Max Planck Institute and discovered a low but persistent error rate: a drift in the critical keying information between a zone's nameservers and its parent. Some of these errors related to transitional states in the DNS (such as when you move registry or DNS provider), but by no means all. This has motivated Peter and his co-authors to look at improved recommendations for managing CDS/CDNSKEY data, to minimise the risk of inconsistency and the consequent loss of the secure entry path to a domain name.
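A minimal sketch of the underlying consistency check, assuming dnspython and a placeholder zone name: ask every authoritative nameserver for the zone's CDS RRset and compare the answers, rather than trusting a single resolver's view.

```python
import dns.message
import dns.query
import dns.resolver

def cds_views(zone: str) -> dict[str, str]:
    """Map each nameserver of `zone` to the CDS RRset it serves."""
    views = {}
    for ns in dns.resolver.resolve(zone, "NS"):
        ns_name = ns.target.to_text()
        addr = dns.resolver.resolve(ns_name, "A")[0].to_text()
        resp = dns.query.udp(dns.message.make_query(zone, "CDS"),
                             addr, timeout=3)
        # Normalise the answer so differing views compare cleanly.
        views[ns_name] = "\n".join(sorted(r.to_text() for r in resp.answer))
    return views

views = cds_views("example.org")  # placeholder zone
if len(set(views.values())) > 1:
    print("CDS inconsistency across nameservers:", views)
```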
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston discusses the slowdown in worldwide IPv6 uptake. Within the Asia-Pacific footprint we have some truly remarkable national statistics, such as India, now over 80% IPv6-enabled by APNIC Labs measurements, and Vietnam, not far behind at 70%. The problem is that worldwide, adjusted for population and considering levels of Internet penetration in the developed economies, the pace of uptake has not improved and has been essentially linear since 2016. In some economies, such as the US, a natural peak of around 50% capability was reached in 2017 and uptake has been essentially flat since then: there is no sign of closure to a global deployment in the US and many other economies.
Geoff takes a high-level view of the logistic supply curve, with its early adopters, early and late majority, and laggards, and sees no clear signal of a visible endpoint where the transition to IPv6 will be "done". Instead, we're facing continual dual-stack operation of both IPv4 (increasingly behind Carrier Grade NATs (CGNs) deployed inside the ISP) and IPv6.
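A toy illustration of that logistic ("S-curve") adoption model, fitted to invented uptake fractions rather than APNIC Labs data: a transition heading for completion shows the fitted ceiling approaching 1.0, while a near-linear series leaves the ceiling ill-constrained, which is exactly the "no visible endpoint" signal.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, ceiling, rate, midpoint):
    # Classic logistic curve: ceiling / (1 + e^(-rate * (t - midpoint)))
    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

years = np.arange(2016, 2025)
uptake = np.array([0.15, 0.18, 0.21, 0.24, 0.27,
                   0.30, 0.33, 0.36, 0.39])  # hypothetical, near-linear

(ceiling, rate, midpoint), _ = curve_fit(
    logistic, years - 2016, uptake, p0=[1.0, 0.5, 8.0], maxfev=10000)
# An ill-constrained ceiling is the point: the data do not reveal
# where, or whether, the curve tops out.
print(f"fitted ceiling: {ceiling:.2f}")
```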
There are success stories in mobile (such as seen in India) and in broadband with central management of the customer router. But, it seems that with the shift in the criticality of routing and numbering to a more name-based steering mechanism and the continued rise of content distribution networks, the pace of IPv6 uptake worldwide has not followed the pattern we had planned for.
In this episode of PING, Vanessa Fernandez and Kavya Bhat, two students from the National Institute of Technology Karnataka (NITK), discuss the student-led, multi-year project to deploy IPv6 on their campus. Kavya and Vanessa have just graduated, and are moving into their next stages of work and study in computer science and network engineering.
Across 2023 and 2024 they were able to attend IETF 118 and IETF 119 and present the project and its experiences to the IPv6 working groups and at side meetings, funded in part by the APNIC ISIF project and the APNIC Foundation.
The multi-year project is supervised by the NITK Centre for Open-source Software and Hardware (COSH) and has outside review from Dhruv Dhody (ISOC) and Nalini Elkins (Inside Products Inc). Former students remain involved as alumni as the project progresses.
We often focus on IPv6 deployment at scale in the telco sector, or on experiences with small deployments in labs, but another side of the IPv6 experience is the large campus network: equivalent in scale to a significant factory or government department deployment, but in this case undertaken by volunteer staff with little or no prior experience of networking technology. Vanessa and Kavya talk about their time on the project, and what they presented at IETF.
In his regular monthly spot on PING, APNIC’s Chief Scientist, Geoff Huston, discusses a large pool of IPv4 addresses left in the IANA registry from the classful allocation days of the mid 1980s. This block, from 240.0.0.0 to 255.255.255.255, encompasses 268 million addresses, a significant chunk of address space: it's equivalent to 16 class-A blocks, each of 16 million addresses. It seems a shame to waste it, so how about getting it back into use?
Back in 2007, Geoff, Paul Wilson, and I submitted an IETF draft which would have removed these addresses from "reserved" status in the IANA registry and used them to supplement the RFC 1918 private-use blocks. We felt at the time this was the best use of these addresses because of their apparent un-routability in the global Internet. Almost all IP network stacks at that time shared a lineage with the BSD network code developed at the University of California and released in 1983 as 4.2BSD. Subsequent versions of this codebase included a two-or-three-line rule inside the kernel which checked the top four bits of the 32-bit address field and refused to forward packets which had all four bits set, reflecting the IANA status marking this range as reserved. The draft did not achieve consensus.
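The historical check amounted to "is this a class-E address?", that is, are the top four bits of the 32-bit address all set. A sketch of the same test using Python's standard library:

```python
import ipaddress

CLASS_E = ipaddress.ip_network("240.0.0.0/4")

def bsd_would_drop(addr: str) -> bool:
    a = ipaddress.ip_address(addr)
    # Equivalent bit test on the raw address: (int(a) >> 28) == 0xF
    return a in CLASS_E

print(bsd_would_drop("240.0.0.1"))    # True: would not be forwarded
print(bsd_would_drop("203.0.113.7"))  # False
```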
A more recent proposal, from Seth Schoen, David Täht, and John Gilmore in 2021, continues to be worked on, but rather than assigning the block to RFC 1918-style internal non-routable use, it puts the addresses into global unicast use. The authors believe the critical filter has now been lifted and no longer persists at large in the BSD- and Linux-derived codebases. This echoes use of the address space already noted inside datacentres.
Geoff has been measuring reachability at large to this address space, using the APNIC Labs measurement system and a prefix in 240.0.0.0/4 temporarily assigned and routed in BGP. The results were not encouraging, and Geoff thinks general routability of the range still faces a very high barrier.
In this episode of PING, Nowmay Opalinski from the French Institute of Geopolitics at Paris 8 University discusses his work on resilience, or rather the lack of it, confronting the Internet in Pakistan.
As discussed in his blog post, Nowmay and his colleagues at the French Institute of Geopolitics (IFG), University Paris 8, and LUMS University in Pakistan combined technical measurements from sources such as RIPE Atlas, in a methodology devised by the GEODE project, with interviews in Pakistan, to explore the reasons behind Pakistan’s comparative fragility in the face of seaborne fibre-optic cable connectivity. The approach deliberately mixes technical and social-science methods, pairing quantitative data with qualitative interviews.
Located at the head of the Arabian Sea, but with only two points of connectivity into the global Internet, Pakistan has suffered over 22 ‘cuts’ to its service in the last 20 years. However, as Nowmay explores in this episode, there are viable fibre connections to India close to Lahore, which are constrained by politics.
Nowmay is completing a PhD at the institute, and is a member of the GEODE project. His paper on this study was presented at the 2024 AINTEC conference held in Sydney, as part of ACM SIGCOMM 2024.
In his regular monthly spot on PING, APNIC’s Chief Scientist, Geoff Huston, discusses another use of DNS Extensions: The EDNS0 Client Subnet option (RFC 7871). This feature, though flagged in its RFC as a security concern, can help route traffic based on the source of a DNS query. Without it, relying only on the IP address of the DNS resolver can lead to incorrect geolocation, especially when the resolver is outside your own ISP’s network.
The EDNS Client Subnet (ECS) signal can help by encoding part of the client’s address through the resolver, improving accuracy in traffic routing. However, this comes at the cost of privacy, raising significant security concerns. This creates tension between two conflicting goals: improving routing efficiency and protecting user privacy.
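A sketch of what the ECS signal looks like on the wire, assuming dnspython; the query name and resolver address are placeholders. Note that only a truncated /24 of the client address is disclosed, which is exactly the accuracy-versus-privacy trade-off described above.

```python
import dns.edns
import dns.message
import dns.query

# Attach an EDNS Client Subnet option carrying the client's /24.
ecs = dns.edns.ECSOption("203.0.113.0", srclen=24)
query = dns.message.make_query("www.example.com", "A",
                               use_edns=0, options=[ecs])
response = dns.query.udp(query, "8.8.8.8", timeout=3)  # placeholder resolver

# A resolver that honours ECS echoes the option back with a scope length.
for opt in response.options:
    print(opt)
```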
Through the APNIC Labs measurement system, Geoff can monitor the prevalence of ECS usage in the wild. He also gains insights into how much end-users rely on their ISP’s DNS resolvers versus opting for public DNS resolver systems that are openly available.
In this episode of PING, Joao Damas from APNIC Labs explores the mechanics of the Labs measurement system. Commencing over a decade ago with an ActionScript (better known as Flash) mechanism backed by a static ISC BIND DNS configuration cycling through a namespace, the Labs advertising measurement system now samples over 15 million end users per day. It uses JavaScript and a hand-crafted DNS system which can synthesise DNS names on the fly, steering users to varying underlying Internet Protocol transport choices, packet sizes, DNS and DNSSEC parameters, and a range of Internet routing-related experiments.
Joao explains how the system works, and the mixture of technologies used to achieve its goals. There's almost no end to the variety of Internet behaviour the system can measure, as long as it can be teased out of the user in a JavaScript-enabled advert backed by the DNS!
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston revisits the question of DNS extensions, in particular the EDNS0 option signalling the maximum UDP packet size accepted, and its effect on the modern DNS.
Through the APNIC Labs measurement system, Geoff has visibility of the success rate of DNS events where EDNS0 signalling triggers DNS “truncation” and the consequent re-query over TCP, the impact of UDP fragmentation even inside the agreed limit, and the ability of clients to handle the UDP packet sizes proffered in their settings.
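A sketch of the behaviour being measured, assuming dnspython with placeholder query name and resolver: offer a given EDNS0 UDP buffer size, and if the answer comes back truncated (TC=1), re-query over TCP.

```python
import dns.flags
import dns.message
import dns.query

query = dns.message.make_query("example.com", "DNSKEY",
                               use_edns=0, payload=1232, want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=3)

if response.flags & dns.flags.TC:
    # The answer did not fit in 1232 bytes of UDP; this TCP re-query is
    # the step whose failure rate the measurement tracks.
    response = dns.query.tcp(query, "8.8.8.8", timeout=3)

print(len(response.to_wire()), "bytes,",
      len(response.answer), "answer RRsets")
```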
Read more about EDNS0 and UDP on the APNIC Blog and at APNIC Labs
Revisiting DNS and UDP truncation (Geoff Huston, APNIC Blog July 2024)
DNS TCP Requery failure rate (APNIC Labs)
In this episode of PING, Caspar Schutijser and Ralph Koning from SIDN Labs in the Netherlands discuss their post-quantum testbed project. As mentioned in the previous PING episode about Post-Quantum Cryptography (PQC) in DNSSEC, with Peter Thomassen from SSE and Jason Goertzen from Sandbox AQ, it's vital we understand how this technology shift will affect real-world DNS systems in deployment.
The SIDN Labs system has been designed as a "one stop shop" for DNS operators to test DNSSEC configurations for their domain management systems, with a complete virtualised environment to run them inside. It's fully scriptable, so it can be modified to suit a number of different situations, and it can incorporate builds of your own critical software components into the system under test.
Read more about the testbed and PQC on the APNIC Blog and at SIDN Labs.
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston continues his examination of DNSSEC. In the first part of this two-part story, Geoff explored the problem space, reviewing the comparative failure of DNSSEC to be deployed by zone holders and the lack of validation by resolvers. This is visible to APNIC Labs through carefully crafted DNS zones in validly and invalidly signed DNSSEC states, which are included in the Labs advertising method of user measurement.
This second episode offers some hope for the future. It reviews the changes which could be made to the DNS protocol, or uses of existing aspects of the DNS, to make DNSSEC safer to deploy. There is considerable benefit in having trust in names, especially as a "service" to Transport Layer Security (TLS), which is now ubiquitous worldwide on the web.
This time on PING, Peter Thomassen from deSEC and Jason Goertzen from Sandbox AQ discuss their research project on post-quantum cryptography in DNSSEC, funded by NLnet.
Post-quantum cryptography is a response to the risk that a future quantum computer will be able to implement Shor's algorithm, a mechanism to uncover the private key in the RSA public-private key cryptographic mechanism, as well as in Diffie-Hellman and elliptic curve methods. This would render all existing public-private key based security useless, because once a third party knows the private key, the ability to sign uniquely over things is lost. DNSSEC doesn't depend on secrecy of messages, but it does depend on RSA and elliptic curve signatures; we'd lose trust in the DNSSEC protections the private key provides.
Post-Quantum Cryptography (PQC) addresses this by implementing methods which are not exposed to the weakness that Shor's algorithm can exploit. But the cost and complexity of these PQC methods rise.
Peter and Jason have been exploring implementations of some of the NIST candidate post-quantum algorithms, deployed into the BIND 9 and PowerDNS codebases. They've been able to use the RIPE Atlas system to test how reliably the signed contents can be seen in the DNS, and have confirmed that, as things stand, the packet sizes the new algorithms produce will be a problem in deployment.
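Some back-of-envelope arithmetic shows why packet size bites. The figures below are indicative approximate signature sizes, compared against the commonly recommended 1232-byte DNS UDP payload, and a DNSSEC response can carry several signatures at once.

```python
# Approximate signature sizes in bytes (indicative only).
SIG_BYTES = {
    "ECDSA P-256":            64,
    "Ed25519":                64,
    "RSA-2048":              256,
    "Falcon-512":            666,
    "ML-DSA-44 (Dilithium)": 2420,
    "SPHINCS+-128s":         7856,
}
UDP_LIMIT = 1232  # common EDNS0 payload guideline

for alg, size in SIG_BYTES.items():
    verdict = "fits" if size < UDP_LIMIT else "needs TCP or fragmentation"
    print(f"{alg:24s} {size:5d} B  -> one signature {verdict}")
```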
As they note, it's too soon to move this work into the IETF DNS standards process, but there is continuing interest in researching the space, with other activity underway at SIDN which we'll also feature on PING.
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston discusses DNSSEC and its apparent failure to deploy at scale after 30 years in the market. Both in the state of signed-zone uptake (the supply side) and in the low levels of validation seen among DNS client users (the consumption side), there is a strong signal that DNSSEC isn't making headway, compared with the uptake of TLS, which is now ubiquitous in connecting to websites. Geoff can see this by measuring client DNSSEC use in the APNIC Labs measurement system, and from tests of the DNS behind the Tranco top website rankings.
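A sketch of the consumption-side idea, assuming dnspython: ask a resolver for a name in a signed zone and inspect the AD ("authenticated data") flag, which a validating resolver sets when the signature chain checks out. The resolver address is a placeholder.

```python
import dns.flags
import dns.message
import dns.query

def resolver_validates(resolver: str, name: str = "ietf.org") -> bool:
    """True if `resolver` reports DNSSEC validation for a signed name."""
    query = dns.message.make_query(name, "A", want_dnssec=True)
    response = dns.query.udp(query, resolver, timeout=3)
    return bool(response.flags & dns.flags.AD)

print(resolver_validates("8.8.8.8"))  # Google Public DNS validates
```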
This is both a problem (the market failure of a trust model in the DNS is a pretty big deal!) and an opportunity (what can we do to make DNSSEC, or some replacement, viable?), which Geoff explores in the first of two parts.
A classic "cliffhanger" conversation about the problem side of things will be followed in due course by a second episode which offers some hope for the future. In the meantime here's the first part, discussing the scale of the problem.
This time on PING, Philip Paeps from the FreeBSD Cluster Administrators and Security teams discusses their approach to systems monitoring and measurement.
It's email.
“Short podcast”, you say? But no, there's a wealth of war stories and “why” to explore in this episode.
We caught up at the APNIC57/APRICOT meeting held in Bangkok in February 2024. Philip has a wealth of experience in systems management and security, and a long history of participation in the free software movement. So his ongoing support of email as a fundamental measure of system health isn't a random decision; it's based on experience.
Mail may not seem like the obvious go-to for a measurement podcast, but Philip makes a strong case that it's one of the best tools available for a high-trust measure of how systems are performing. Its first and second derivatives indicate the velocity and acceleration of mail flows, pointing to the continuance of, or a change in, the underlying system issues.
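A toy version of that signal: sample the mail queue length over time, then take its first difference (velocity) and second difference (acceleration). Steady small numbers are healthy; a growing positive second difference says a problem is getting worse. The sample values here are invented for illustration.

```python
# Hypothetical hourly mail-queue lengths, e.g. from counting `mailq` output.
queue_lengths = [12, 14, 13, 15, 48, 130, 390]

velocity = [b - a for a, b in zip(queue_lengths, queue_lengths[1:])]
acceleration = [b - a for a, b in zip(velocity, velocity[1:])]

print("velocity:    ", velocity)      # [2, -1, 2, 33, 82, 260]
print("acceleration:", acceleration)  # [-3, 3, 31, 49, 178]
```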
Philip has good examples of how mail from the FreeBSD cluster systems indicates different aspects of system health, such as network delays and disk issues. He's realistic that there are other tools in the armoury, especially the Nagios and Zabbix systems, which are deployed in parallel. But from time to time, the first and best indication of trouble emerges from a review of the behaviour of email.
A delightfully simple and robust approach to systems monitoring can emerge from the use of fundamental tools which are part of your core distribution.
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston discusses the question of subnet structure, looking into the APNIC Labs measurement data which collects around 8 million discrete IPv6 addresses per day, worldwide.
Subnets are a concept which "came along for the ride" at the birth of the Internet Protocol, and were baked into the address distribution model as the class-A, class-B, and class-C network sizes (there are also class-D and class-E addresses we don't talk about much).
The idea of a subnet is distinct from a routing network. Many pre-Internet models of networking had some kind of public-local split, but the idea of more than one level of structure in what is "local" had to emerge as more complex network designs and protocols came into being.
Subnets are the idea of structure inside the addressing plan. They imply logical, and often physical, separation of hosts, and a structural dependency on routing. There can be subnets inside subnets; it's "turtles all the way down" in networks.
IP had the ability out of the box to define subnets, and when we moved beyond the classful model to Classless Inter-Domain Routing (CIDR), the prefix/length model of networks came to life.
But IPv6 is different, and the assumption that we are heading to a net-subnet-host model of networks may not hold in IPv6, or in the modern world of high-speed, complex silicon for routing and switching.
Geoff discusses an approach to modelling how network assignments are being used in deployment, which was raised by Nathan Ward in a recent NZNOG meeting. Geoff has been able to look into his huge collection of IPv6 addresses and see what's really going on.
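A sketch of the kind of structural analysis this implies: collapse observed IPv6 addresses to candidate subnet sizes and count how many distinct prefixes appear at each, hinting at where the real "subnet" boundary sits in deployment. The sample addresses are invented.

```python
import ipaddress
from collections import Counter

observed = [
    "2001:db8:1:2:aaaa::1", "2001:db8:1:2:bbbb::2",
    "2001:db8:1:3::1", "2001:db8:2:1::1",
]

for plen in (48, 56, 64):
    # strict=False masks each host address down to its enclosing prefix.
    prefixes = Counter(
        ipaddress.ip_network(f"{a}/{plen}", strict=False) for a in observed)
    print(f"/{plen}: {len(prefixes)} distinct prefixes")
```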
This time on PING, Doug Madory from Kentik discusses his recent measurements of the RPKI system worldwide, and its visible impact on the stability and security of BGP.
Doug makes significant use of the Oregon RouteViews repository of BGP data, a collection maintained continuously at the University of Oregon for decades. It includes data back to 1997, originally collected by the NLANR/MOAT project, and holds archives of BGP Routing Information Base (RIB) dumps taken every two hours from a variety of sources, made available in both human-readable and machine-readable binary formats.
This collection, along with the RIPE RIS collection, has become the de facto standard for publicly available BGP state worldwide. As Doug discusses, research papers which cite Oregon RouteViews data (over 1,000 are known, and many more exist which have not registered their use of the data) invite serious appraisal because of the reproducibility of the research, and thus the testability of the conclusions drawn. It is a vehicle for higher-quality science about the nature of the Internet as seen through BGP.
Doug presented on RPKI and BGP at the APOPS session held in February at APRICOT/APNIC57 in Bangkok, Thailand.
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses Starlink again, and the ability of modern TCP flow-control algorithms to cope with the highly variable loss and delay seen over this satellite network. Geoff has been doing more measurements using Starlink terminals in Australia and the USA, at different times of day, exploring the system's behaviour.
Starlink has broken new ground in Low Earth Orbit Internet services. Unlike geosynchronous satellite services, which have a long delay but constant visibility of a satellite in stationary orbit above, Starlink requires the consumer to continuously re-select a new satellite as they move overhead in orbit; a new satellite has to be picked every 15 seconds. This means there's a high degree of variability in the behaviour of the link, both in signal quality to each satellite and in the brief interval of loss occurring at each satellite re-selection window.
It's a miracle TCP can survive at all, and in the case of the newer BBR algorithm even thrive, achieving remarkably high throughput when circumstances permit. This is because of the change from the slow-start, fast-backoff model used in CUBIC and Reno to a much more aggressive link bandwidth estimation model, which continuously probes to see if there is more room to play in.
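On Linux, congestion control is selectable per socket, so an experiment can compare CUBIC and BBR over the same Starlink path. A minimal sketch, assuming the kernel has the tcp_bbr module available and using a placeholder host:

```python
import socket

def fetch_with_cc(host: str, cc: bytes) -> int:
    """Fetch '/' over HTTP using the named congestion control algorithm."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Linux-only socket option selecting the CC algorithm for this flow.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, cc)
    s.settimeout(10)
    s.connect((host, 80))
    s.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    total = 0
    while chunk := s.recv(65536):
        total += len(chunk)
    s.close()
    return total

for cc in (b"cubic", b"bbr"):
    print(cc.decode(), fetch_with_cc("example.com", cc), "bytes")
```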
This time on PING, Dr Mona Jaber from Queen Mary University of London (QMUL) discusses her work exploring IoT, Digital Twins, and social-science-led research in the field of networking and telecommunications.
Dr Jaber is a senior lecturer at QMUL and is the founder and director of the Digital Twins for Sustainable Development Goals (DT4SDG) project at QMUL. She was one of the invited keynote speakers at the recent APRICOT/APNIC57 meeting held in Bangkok, and the podcast explores the three major themes of her keynote presentation:
The role of deployed fibre optic communication systems in measurement for sustainable green goals
Digital Twin Simulation platforms for exploring the problem space
Social-science-led research: an interdisciplinary approach to formulating and exploring problems, applied to Sustainable Development-related research through technical innovation in IoT, AI, and Digital Twins.
The fibre-optic measurement method is Distributed Acoustic Sensing, or DAS:
"DAS reuses underground fibre optic cables as distributed strain sensing where the strain is caused by moving objects above ground. DAS is not affected by weather or light and the fibre optic cables are often readily available, offering a continuous source for sensing along the length of the cable. Unlike video cameras, DAS systems also offer a GDPR-compliant source of data."
The DASMATE Project at theengineer.co.uk
This episode of PING was recorded live at the venue and is a bit noisier than the usual recordings, but it's well worth putting up with the background chatter!
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses the European Union's consideration of taking a role in the IETF, as itself. Network engineers, policy makers, and scientists from all around the world have participated in the IETF, but this is the first time an entity like the EU has considered participating as itself in the process of standards development.
What has led to this outcome? What is driving the concern that the EU, as a law-setting and treaty body and an inter-governmental trade bloc, needs to participate in the IETF process? Is this a misunderstanding of the nature of Internet standards development, or does it reflect a concern that standards are diverging from society's needs? Geoff wrote this up in a recent opinion piece on the APNIC Blog, and the podcast is a conversation around the topic.