PING
Author: APNIC
© Copyright 2024 PING
Description
PING is a podcast for people who want to look behind the scenes into the workings of the Internet. Each fortnight we chat with people who have built the Internet and are working to improve its health.
The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
76 Episodes
In this episode of PING, Vanessa Fernandez and Kavya Bhat, two students from the National Institute of Technology Karnataka (NITK), discuss the student-led, multi-year project to deploy IPv6 at their campus. Kavya and Vanessa have just graduated, and are moving into the next stages of their work and study in computer science and network engineering.
Across 2023 and 2024 they were able to attend IETF 118 and IETF 119 and present their project and its experiences to the IPv6 working groups and at side meetings, in part funded by the APNIC ISIF project and the APNIC Foundation.
This multi-year project is supervised by the NITK Centre for Open-source Software and Hardware (COSH) and has outside review from Dhruv Dhody (ISOC) and Nalini Elkins (Inside Products Inc.). Former students remain involved as alumni as the project progresses.
We often focus on IPv6 deployment at scale in the telco sector, or on experiences with small deployments in labs, but another side of the IPv6 experience is the large campus network: equivalent in scale to a significant factory or government department deployment, but in this case undertaken by volunteer staff with little or no prior experience of networking technology. Vanessa and Kavya talk about their time on the project, and what they presented at the IETF.
In his regular monthly spot on PING, APNIC’s Chief Scientist, Geoff Huston, discusses a large pool of IPv4 addresses left in the IANA registry from the classful allocation days back in the mid-1980s. This block, from 240.0.0.0 to 255.255.255.255, encompasses 268 million addresses, a significant chunk of address space: it's equivalent to 16 class-A blocks, each of 16 million addresses. It seems a shame to waste it, so how about getting it back into use?
Back in 2007, Geoff, Paul, and I submitted an IETF draft which would have removed these addresses from their "reserved" status in the IANA registry and used them to supplement the RFC 1918 private-use blocks. We felt at the time this was the best use of these addresses because of their apparent unroutability in the global Internet. Almost all IP network stacks at that time shared a lineage with the BSD network code developed at the University of California and released in 1983 as BSD 4.2. Versions of this codebase included a two- or three-line check inside the kernel which examined the top four bits of the 32-bit address field and refused to forward packets which had all four bits set, reflecting the IANA status of the range as reserved. The draft did not achieve consensus.
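As an illustration only (this is not the BSD kernel code itself, just equivalent logic sketched in Python), the check amounts to testing whether the top four bits of the address are all set:

```python
import ipaddress

# The reserved "class E" range 240.0.0.0-255.255.255.255 is exactly
# the set of IPv4 addresses whose top four bits are all set (0xF).
def in_reserved_class_e(addr: str) -> bool:
    """Return True if addr falls within 240.0.0.0/4."""
    value = int(ipaddress.IPv4Address(addr))  # 32-bit integer form
    return (value & 0xF0000000) == 0xF0000000

assert in_reserved_class_e("240.0.0.1")
assert not in_reserved_class_e("203.0.113.7")
```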
A more recent proposal emerged from Seth Schoen, David Täht, and John Gilmore in 2021 and continues to be worked on; rather than assigning the block to RFC 1918-style internal non-routable use, it puts the addresses into global unicast use. The authors believe the critical filter in devices has now been lifted and no longer persists at large in the BSD- and Linux-derived codebases. This echoes the use of this address space which has been noted inside the datacentre.
Geoff has been measuring reachability to this address space at large, using the APNIC Labs measurement system and a prefix in 240.0.0.0/4 temporarily assigned and routed in BGP. The results were not encouraging, and Geoff thinks general routability of the range remains a very high bar to clear.
In this episode of PING, Nowmay Opalinski from the French Institute of Geopolitics at Paris 8 University discusses his work on the resilience, or rather the lack of it, of the Internet in Pakistan.
As discussed in his blog post, Nowmay and his colleagues at the French Institute of Geopolitics (IFG), University Paris 8, and LUMS University Pakistan used technical measurements from sources such as RIPE Atlas, in a methodology devised by the GEODE project, combined with interviews in Pakistan, to explore the reasons behind Pakistan’s comparative fragility in the face of its dependence on seaborne fibre-optic cable connectivity. The approach deliberately combines technical and social-science methods for exploring the problem space, pairing quantitative data with qualitative interviews.
Located at the head of the Arabian Sea, but with only two points of connectivity into the global Internet, Pakistan has suffered over 22 ‘cuts’ to the service in the last 20 years. However, as Nowmay explores in this episode, there are in fact viable fibre connections to India close to Lahore, which are constrained by politics.
Nowmay is completing a PhD at the institute, and is a member of the GEODE project. His paper on this study was presented at the 2024 AINTEC conference held in Sydney, as part of ACM SIGCOMM 2024.
In his regular monthly spot on PING, APNIC’s Chief Scientist, Geoff Huston, discusses another use of DNS extensions: the EDNS0 Client Subnet option (RFC 7871). This feature, though flagged in its RFC as a security concern, can help route traffic based on the source of a DNS query. Without it, relying only on the IP address of the DNS resolver can lead to incorrect geolocation, especially when the resolver is outside your own ISP’s network.
The EDNS Client Subnet (ECS) signal can help by encoding the client’s address through the resolver, improving accuracy in traffic routing. However, this comes at the cost of privacy, raising significant security concerns. This creates tension between two conflicting goals: improving routing efficiency and protecting user privacy.
Through the APNIC Labs measurement system, Geoff can monitor the prevalence of ECS usage in the wild. He also gains insights into how much end-users rely on their ISP’s DNS resolvers versus opting for public DNS resolver systems that are openly available.
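As a small illustration of what an ECS-bearing query looks like, here is a sketch using the dnspython library; the query name, the /24 client prefix (a documentation address), and the choice of the 8.8.8.8 public resolver are all placeholders:

```python
import dns.edns
import dns.message
import dns.query

# Attach an EDNS Client Subnet (ECS, RFC 7871) option announcing the
# client's /24 prefix, so the authoritative server can locate the
# query by the client's network rather than the resolver's address.
ecs = dns.edns.ECSOption("192.0.2.0", 24)
query = dns.message.make_query("www.example.com", "A", use_edns=0, options=[ecs])

response = dns.query.udp(query, "8.8.8.8", timeout=5)
for option in response.options:
    if isinstance(option, dns.edns.ECSOption):
        print("server echoed ECS scope:", option)
```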
In this episode of PING, Joao Damas from APNIC Labs explores the mechanics of the Labs measurement system. Commencing over a decade ago with an ActionScript (better known as Flash) mechanism, backed by a static ISC BIND DNS configuration cycling through a namespace, the Labs advertising-based measurement system now samples over 15 million end users per day. It uses JavaScript and a hand-crafted DNS system which can synthesise DNS names on the fly, leading users through varying underlying Internet Protocol transport choices, packet sizes, DNS and DNSSEC parameters, along with a range of Internet routing related experiments.
Joao explains how the system works, and the mixture of technologies used to achieve its goals. There's almost no end to the variety of Internet behaviour the system can measure, as long as it can be teased out of the user by a JavaScript-enabled advert backed by the DNS!
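A rough sketch of the "unique name per impression" idea behind such a system follows; the zone and label scheme here are invented for illustration:

```python
import uuid

def measurement_name(experiment: str, zone: str = "labs.example") -> str:
    """Synthesise a globally unique DNS name for one ad impression.

    A never-seen-before label defeats caching, forcing every fetch all
    the way back to the experiment's authoritative servers, where it
    can be logged and correlated with the client's web fetch.
    """
    return f"{uuid.uuid4().hex}.{experiment}.{zone}"

print(measurement_name("ipv6"))  # e.g. 3f2a9c....ipv6.labs.example
```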
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston revisits the question of DNS extensions, in particular the EDNS0 option signalling the maximum UDP packet size accepted, and its effect in the modern DNS.
Through the APNIC Labs measurement system, Geoff has visibility of the success rate of DNS events where EDNS0 signalling triggers DNS “truncation” and the consequent re-query over TCP, the impact of UDP fragmentation even inside the agreed limit, and the ability of resolvers to handle the UDP packet sizes they proffer in their settings.
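A minimal sketch of that signalling, using the dnspython library (the query name and the 9.9.9.9 resolver are placeholders):

```python
import dns.flags
import dns.message
import dns.query

# Advertise a maximum UDP payload of 1232 bytes via EDNS0, query over
# UDP, and re-query over TCP if the server sets the TC (truncated) bit.
query = dns.message.make_query("example.com", "TXT", use_edns=0, payload=1232)
response = dns.query.udp(query, "9.9.9.9", timeout=5)

if response.flags & dns.flags.TC:
    # The answer did not fit within 1232 bytes: this TCP re-query is
    # exactly the event whose success rate the measurement tracks.
    response = dns.query.tcp(query, "9.9.9.9", timeout=5)

print(len(response.to_wire()), "bytes in the final response")
```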
Read more about EDNS0 and UDP on the APNIC Blog and at APNIC Labs:
Revisiting DNS and UDP truncation (Geoff Huston, APNIC Blog July 2024)
DNS TCP Requery failure rate (APNIC Labs)
In this episode of PING, Caspar Schutijser and Ralph Koning from SIDN Labs in the Netherlands discuss their post-quantum testbed project. As mentioned in the previous PING episode about Post-Quantum Cryptography (PQC) in DNSSEC, with Peter Thomassen from SSE and Jason Goertzen from Sandbox AQ, it's vital we understand how this technology shift will affect real-world DNS systems in deployment.
The SIDN Labs system has been designed to be a "one-stop shop" for DNS operators to test DNSSEC configurations for their domain management systems, with a complete virtualised environment to run inside. It's fully scriptable, so it can be modified to suit a number of different situations, and can even incorporate builds of your own critical software components into the system under test.
Read more about the testbed and PQC on the APNIC Blog and at SIDN Labs.
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston continues his examination of DNSSEC. In the first part of this two-part story, Geoff explored the problem space, with a review of the comparative failure of zone holders to deploy DNSSEC, and the lack of validation by resolvers. This is visible to APNIC Labs through carefully crafted DNS zones with validly and invalidly signed DNSSEC states, which are included in the Labs advertising method of user measurement.
This second episode offers some hope for the future. It reviews the changes which could be made to the DNS protocol, or uses of existing aspects of the DNS, to make DNSSEC safer to deploy. There is considerable benefit in having trust in names, especially as a "service" to Transport Layer Security (TLS), which is now ubiquitous worldwide in the web.
This time on PING, Peter Thomassen from deSEC and Jason Goertzen from Sandbox AQ discuss their research project on post-quantum cryptography in DNSSEC, funded by NLnet Labs.
Post-quantum cryptography is a response to the risk that a future quantum computer will be able to run Shor's algorithm, a mechanism to uncover the private key in the RSA public-private key cryptosystem, as well as in Diffie-Hellman and elliptic curve methods. This would render all existing public-private key based security useless, because once a third party knows the private key, the ability to sign uniquely over things is lost. DNSSEC doesn't depend on secrecy of messages, but it does depend on RSA and elliptic curve signatures, so we'd lose trust in the protections DNSSEC's private keys provide.
Post-Quantum Cryptography (PQC) addresses this by implementing methods which are not exposed to the weakness that Shor's algorithm can exploit. But the cost and complexity of these PQC methods is higher.
Peter and Jason have been exploring implementations of some of the NIST candidate post-quantum algorithms, deployed into BIND 9 and PowerDNS code. They've been able to use the Atlas system to test how reliably the signed contents can be seen in the DNS, and have confirmed that some aspects of packet size in the DNS, and the new algorithms, will be a problem in deployment as things stand.
As they note, it's too soon to move this work into the IETF DNS standards process, but there is continuing interest in researching the space, with other activity underway from SIDN which we'll also feature on PING.
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston discusses DNSSEC and its apparent failure to deploy at scale in the market after 30 years. Both in the state of signed-zone uptake (the supply side) and in the low levels of validation seen by DNS client users (the consumption side), there is a strong signal that DNSSEC isn't making headway, compared to the uptake of TLS, which is now ubiquitous in connections to websites. Geoff can see this through measurement of client DNSSEC validation in the APNIC Labs measurement system, and from tests of the DNS behind the Tranco top website rankings.
This is both a problem (the market failure of a trust model in the DNS is a pretty big deal!) and an opportunity (what can we do to make DNSSEC, or some replacement, viable?), which Geoff explores in the first of two parts.
A classic "cliffhanger" conversation about the problem side of things will be followed in due course by a second episode which offers some hope for the future. In the meantime here's the first part, discussing the scale of the problem.
This time on PING, Philip Paeps from the FreeBSD Cluster Administrators and Security teams discusses their approach to systems monitoring and measurement.
It's email.
“Short podcast” you say, but no, there’s a wealth of war-stories and “why” to explore in this episode.
We caught up at the APNIC57/APRICOT meeting held in Bangkok in February 2024. Philip has a wealth of experience in systems management and security, and a long history of participation in the free software movement. So his ongoing support of email as a fundamental measure of system health isn't a random decision; it's based on experience.
Mail may not seem like the obvious go-to for a measurement podcast, but Philip makes a strong case that it's one of the best tools available for a high-trust measure of how systems are performing: the first and second order derivatives of mail flow indicate its velocity and rate of change, which in turn signal the continuance of, or a change in, the underlying systems issues.
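A toy sketch of that idea (the hourly counts are invented): the first difference of a mail-volume series approximates its velocity, and the second difference its acceleration.

```python
# Hourly counts of messages through a mail system (invented numbers).
volumes = [120, 118, 125, 122, 240, 480, 960]

# First difference ~ velocity: how fast is mail volume changing?
velocity = [b - a for a, b in zip(volumes, volumes[1:])]

# Second difference ~ acceleration: is that change itself speeding up?
acceleration = [b - a for a, b in zip(velocity, velocity[1:])]

print(velocity)      # [-2, 7, -3, 118, 240, 480]
print(acceleration)  # [9, -10, 121, 122, 240]
# A sustained positive second difference, as in the doubling pattern
# here, is the kind of signal that something upstream has changed.
```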
Philip has good examples of how mail from the FreeBSD cluster systems indicates different aspects of systems health: network delays, disk issues, and more. He's realistic that there are other tools in the armoury, especially the Nagios and Zabbix systems, which are deployed in parallel. But from time to time, the first and best indication of trouble emerges from a review of the behaviour of email.
A delightfully simple and robust approach to systems monitoring can emerge from the use of fundamental tools which are part of your core distribution.
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston discusses the question of subnet structure, looking into the APNIC Labs measurement data which collects around 8 million discrete IPv6 addresses per day, worldwide.
Subnets are a concept which "came along for the ride" at the birth of the Internet Protocol, and were baked into the address distribution model as the class-A, class-B, and class-C network classes (there are also class-D and class-E addresses we don't talk about much).
The idea of a subnet is distinct from that of a routed network. Many pre-Internet models of networking had some kind of public-local split, but the idea of more than one level of structure in what is "local" had to emerge as more complex network designs and protocols came into being.
Subnets are the idea of structure inside the addressing plan. They imply logical and often physical separation of hosts, and a structural dependency on routing. There can be subnets inside subnets; it's "turtles all the way down" in networks.
IP had an out-of-the-box ability to define subnets, and when we moved beyond the classful model to Classless Inter-Domain Routing (CIDR), the idea of prefix/length models of networks came to life.
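For illustration, the prefix/length model, and subnets nested inside subnets, are easy to see with Python's standard ipaddress module (the prefixes below are documentation examples):

```python
import ipaddress

# A CIDR prefix is just an address plus a length: no classes involved.
net = ipaddress.ip_network("192.0.2.0/24")

# Carving it into four /26 subnets: structure inside the address plan.
for subnet in net.subnets(prefixlen_diff=2):
    print(subnet)  # 192.0.2.0/26, 192.0.2.64/26, ...

# The same prefix/length model applies unchanged to IPv6.
v6 = ipaddress.ip_network("2001:db8::/32")
print(next(v6.subnets(new_prefix=48)))  # 2001:db8::/48
```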
But IPv6 is different, and the assumption that we are heading to a net-subnet-host model of networks may not hold in IPv6, or in the modern world of high-speed, complex silicon for routing and switching.
Geoff discusses an approach to modelling how network assignments are being used in deployment, which was raised by Nathan Ward at a recent NZNOG meeting. Geoff has been able to look into his huge collection of IPv6 addresses and see what's really going on.
This time on PING, Doug Madory from Kentik discusses his recent measurements of the RPKI system worldwide, and its visible impact on the stability and security of BGP.
Doug makes significant use of the Oregon RouteViews repository of BGP data, a collection maintained continuously at the University of Oregon for decades. It includes data back to 1997, originally collected by the NLANR/MOAT project, and has archives of BGP Routing Information Base (RIB) dumps taken every two hours from a variety of sources, made available in both human-readable and machine-readable binary formats.
This collection, along with the RIPE RIS collection, has become the de facto standard for publicly available BGP state worldwide. As Doug discusses, research papers which cite Oregon RouteViews data (over 1,000 are known of, but many more exist which have not registered their use of the data) invite serious appraisal because of the reproducibility of the research, and thus the testability of the conclusions drawn. It is a vehicle for higher-quality science about the nature of the Internet as seen through BGP.
Doug presented on RPKI and BGP at the APOPS session held in February at APRICOT/APNIC57 in Bangkok, Thailand.
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses Starlink again, and the ability of modern TCP flow-control algorithms to cope with the highly variable loss and delay seen over this satellite network. Geoff has been doing more measurements using Starlink terminals in Australia and the USA, at different times of day, exploring the system's behaviour.
Starlink has broken new ground in Low Earth Orbit Internet services. Unlike geosynchronous satellite services, which have a long delay but constant visibility of the satellite in stationary orbit above, Starlink requires the consumer terminal to continuously re-select a new satellite as they move overhead in orbit; in fact, a new satellite has to be picked every 15 seconds. This means there's a high degree of variability in the behaviour of the link, both in signal quality to each satellite, and in the brief interval of loss occurring at each satellite re-selection window.
It's a miracle TCP can survive, and in the case of the newer BBR algorithm it can in fact thrive and achieve remarkably high throughput, if the circumstances permit. This is because of the change from the slow-start, fast-backoff model used in CUBIC and Reno to a much more aggressive link-bandwidth estimation model, which continuously probes to see if there is more room to play in.
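A toy model (emphatically not CUBIC or BBR themselves) can show why the difference matters on a link with brief, periodic loss like Starlink's re-selection events:

```python
# Toy comparison: a loss-based halve-on-loss sender versus a
# rate-probing sender, over a link that drops packets briefly every
# 15 seconds (mimicking Starlink satellite re-selection).
LINK = 100.0  # available bandwidth, arbitrary units

def run(policy, seconds=60):
    rate, total = 10.0, 0.0
    for t in range(seconds):
        loss = (t % 15 == 0)  # a brief loss event at each re-selection
        rate = policy(rate, loss)
        total += min(rate, LINK)
    return total / seconds

def loss_based(rate, loss):
    # Halve on loss, grow additively otherwise (Reno-like behaviour).
    return rate / 2 if loss else min(rate + 1.0, LINK)

def probing(rate, loss):
    # Keep probing towards estimated capacity; ignore isolated losses.
    return min(rate * 1.25, LINK)

print("loss-based average throughput:", round(run(loss_based), 1))
print("probing average throughput:   ", round(run(probing), 1))
```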
This time on PING, Dr Mona Jaber from Queen Mary University of London (QMUL) discusses her work exploring IoT, digital twins, and social-science-led research in the field of networking and telecommunications.
Dr Jaber is a senior lecturer at QMUL and is the founder and director of the Digital Twins for Sustainable Development Goals (DT4SDG) project at QMUL. She was one of the invited keynote speakers at the recent APRICOT/APNIC57 meeting held in Bangkok, and the podcast covers the three major themes of her keynote presentation:
The role of deployed fibre optic communication systems in measurement for sustainable green goals
Digital Twin Simulation platforms for exploring the problem space
Social Sciences led research, an inter-disciplinary approach to formulating and exploring problems which has been applied to Sustainable Development-related research through technical innovation in IoT, AI, and Digital Twins.
The fibre-optic measurement method is Distributed Acoustic Sensing, or DAS:
"DAS reuses underground fibre optic cables as distributed strain sensing where the strain is caused by moving objects above ground. DAS is not affected by weather or light and the fibre optic cables are often readily available, offering a continuous source for sensing along the length of the cable. Unlike video cameras, DAS systems also offer a GDPR-compliant source of data."
The DASMATE Project at theengineer.co.uk
This episode of PING was recorded live in the venue and is a bit noisy compared to the usual recordings, but it's well worth putting up with the background chatter!
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses the European Union's consideration of taking a role in the IETF, as itself. Network engineers, policy makers, and scientists from all around the world have participated in the IETF, but this is the first time an entity like the EU has considered participating, as itself, in the process of standards development.
What's led to this outcome? What is driving the concern that the EU, as a law-setting and treaty body and an inter-governmental trade bloc, needs to participate in the IETF process? Is this a misunderstanding of the nature of Internet standards development, or does it reflect a concern that standards are diverging from society's needs? Geoff wrote this up in a recent opinion piece on the APNIC Blog, and the podcast is a conversation around the topic.
This time on PING we have Phil Regnauld from the DNS Operations, Analysis, and Research Center (DNS-OARC) talking about the three distinct faces OARC presents to the community.
Phil came to the OARC president's role replacing Keith Mitchell, who was the founding president from 2008 through to this year. Phil previously worked with the Network Startup Resource Center (NSRC), with AFNOG, and with the Francophone Internet community at large.
DNS-OARC has at least three distinct faces. First, it is a community of DNS operators and researchers, who maintain an active, ongoing dialogue face to face in workshops and online in the OARC Mattermost community hub. Secondly, it is a home, repository, and ongoing development environment for DNS-related tools, such as DNSViz (written by Casey Deccio), the hosting of the AS112 project, and the development of the DSC systems, among many others.
Thirdly, it is the organiser and host of the "Day In The Life" (DITL) activity: the periodic collection of 48 to 72 hours of DNS traffic from the DNS root server operators and other significant sources of DNS traffic. Stretching back over 10 years, DITL is a huge resource for DNS research, providing insights into the use of the DNS and its behaviour on the wire.
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses a newly proposed DNS resource record called DELEG. The record is being designed to aid in managing where a DNS zone is delegated.
Delegation is the primary mechanism used in the DNS to separate responsibility between child and parent for a given domain name. The DELEG RR is designed to address several problems, including the goal of moving to new transports for the name resolution service the DNS provides to all other Internet protocols.
Additionally, Geoff believes it can help with the cost and management issues inherent in out-of-band external domain name management through the registry/registrar process, bound up in the whois system and in a protocol called the Extensible Provisioning Protocol (EPP).
There are big costs here, and they include some problems in dealing with intermediaries who manage your DNS on your behalf.
Unlike whois, EPP, and registrar functions, DELEG would be an in-band mechanism between the parent zone, any associated registry, and the delegated child zone. It's a classic disintermediation story about improved efficiency, and it enables the domain name holder to nominate intermediaries for their services via an aliasing mechanism that has until now eluded the DNS.
This time on PING we have Amreesh Phokeer from the Internet Society (ISOC) talking about a system they operate called Pulse, available at https://pulse.internetsociety.org/. Pulse’s purpose is to assess the “resiliency” of the Internet in a given locality.
Similar systems we have discussed before on PING include APNIC's DASH service, aimed at resource-holding APNIC Members, and the MANRS project. Both of these take underlying statistics, like resource distribution data or measurements of RPKI uptake and BGP behaviours, and present them to the community, and in the case of MANRS there's a formalised "score" which shows your ranking against current best practices.
The Pulse system measures resilience across four pillars: infrastructure, quality, security, and market readiness. Some of these are "hard" measures analogous to MANRS and DASH, but Pulse also includes "soft" indicators, such as the economic impact of design decisions in an economy of interest, the extent of competition, and less formally defined attributes like the amount of resiliency behind BGP transit. This allows the ISOC Pulse system to consider governance-related aspects of the development of the Internet, with a simple scoring model which yields a single health metric, analogous to a physician's use of pulse and blood pressure to assess your condition, but this time applied to the Internet.
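As a toy sketch only (the pillar values and the equal weighting below are invented, not Pulse's actual model), a single index can be composed from pillar scores like this:

```python
# Combine per-pillar scores (0.0-1.0) into one resilience index.
# These values and the equal weights are invented for illustration.
pillars = {"infrastructure": 0.62, "quality": 0.71,
           "security": 0.55, "market_readiness": 0.48}
weights = {name: 0.25 for name in pillars}  # assumed equal weighting

index = sum(score * weights[name] for name, score in pillars.items())
print(f"resilience index: {index:.2f}")  # 0.59
```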
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses the role of the DNS in directing where your applications connect to, and where content comes from. Although this is more a matter of "steering" traffic than of "routing" it in the strict sense of IP packet forwarding (that's still the function of the Border Gateway Protocol, or BGP), it does in fact represent a kind of routing decision: selecting a content source or server logistically "best" or "closest" to you. So, in the spirit of "Orange is the New Black", DNS is the new BGP.
As this change in the delivery of content has emerged, effective control over this kind of routing decision has become concentrated in the hands of a small number of at-scale Content Distribution Networks (CDNs) and associated DNS providers worldwide. This is far fewer parties than the 80,000 or so BGP speakers with their own AS, and it represents another trend to be thought about. How we optimise content delivery isn't decided in common amongst us; it's managed by simpler contractual relationships between content owners and intermediaries.
The upside, of course, remains the improvement in the efficiency of the fetch for each client, and the reduction in delay and loss. But the evolution of the Internet over time, and the implications for governance of these "steering" decisions, will be of increasing concern.
Read more about Geoff’s views of Concentration in the Internet, Governance, and Economics on the APNIC Blog and at APNIC Labs: