PING

Author: APNIC

Description

PING is a podcast for people who want to look behind the scenes into the workings of the Internet. Each fortnight we chat with the people who have built the Internet and are working to improve its health.
The views expressed by the featured speakers are their own and do not necessarily reflect the views of APNIC.
66 Episodes
This time on PING, Philip Paeps from the FreeBSD Cluster Administrators and Security teams discusses their approach to systems monitoring and measurement. It’s email. “Short podcast” you say, but no, there’s a wealth of war-stories and “why” to explore in this episode. We caught up at the APNIC57/APRICOT meeting held in Bangkok in February of 2024. Philip has a wealth of experience in systems management and security, and a long history of participation in the free software movement, so his ongoing support of email as a fundamental measure of system health isn’t a random decision: it’s based on experience. Mail may not seem like the obvious go-to for a measurement podcast, but Philip makes a strong case that it’s one of the best tools available for a high-trust measure of how systems are performing, and its first and second order derivatives can indicate the velocity and rate of change of mail flows, pointing to the continuance or change of the underlying systems issues. Philip has good examples of how mail from the FreeBSD cluster systems indicates different aspects of systems health: network delays, disk issues and more. He’s realistic that there are other tools in the armoury, especially the Nagios and Zabbix systems, which are deployed in parallel. But from time to time the first, best indication of trouble emerges from a review of the behaviour of email. A delightfully simple and robust approach to systems monitoring can emerge from use of the fundamental tools which are part of your core distribution.
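The derivative idea is easy to make concrete. Below is a minimal sketch in Python; it is not the FreeBSD team’s actual tooling, and the hourly counts and thresholds are invented. It simply flags hours where the velocity or acceleration of mail flow jumps.

from typing import List

def diffs(series: List[int]) -> List[int]:
    """First-order difference of a time series."""
    return [b - a for a, b in zip(series, series[1:])]

def flag_anomalies(hourly_counts: List[int], velocity_limit: int = 50,
                   accel_limit: int = 25) -> List[int]:
    """Return the hours where mail flow changes suspiciously fast."""
    velocity = diffs(hourly_counts)       # rate of change of mail volume
    acceleration = diffs(velocity)        # rate of change of the rate
    flagged = set()
    for i, v in enumerate(velocity, start=1):
        if abs(v) > velocity_limit:
            flagged.add(i)
    for i, a in enumerate(acceleration, start=2):
        if abs(a) > accel_limit:
            flagged.add(i)
    return sorted(flagged)

# A sudden spike in cron/daemon mail often precedes an outage report.
counts = [12, 14, 11, 13, 90, 160, 250, 240]   # made-up hourly volumes
print(flag_anomalies(counts))                  # -> [4, 5, 6, 7]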
In his regular monthly spot on PING, APNIC’s Chief Scientist Geoff Huston discusses the question of subnet structure, looking into the APNIC Labs measurement data which collects around 8 million discrete IPv6 addresses per day, worldwide. Subnets are a concept which "came along for the ride" in the birth of Internet Protocol, and were baked into the address distribution model as the class-A, class-B and class-C network classes (there are also class-D and class-E addresses we don't talk about much). The idea of a subnet is distinct from a routing network; many pre-Internet models of networking had some kind of public-local split, but the idea of more than one level of structure in what is "local" had to emerge when more complex network designs and protocols came into being. Subnets are the idea of structure inside the addressing plan, and imply logical and often physical separation of hosts, and structural dependency on routing. There can be subnets inside subnets: it's "turtles all the way down" in networks. IP had the ability out of the box to permit subnets to be defined, and when we moved beyond the classful model into Classless Inter-Domain Routing, or CIDR, the idea of prefix/length models of networks came to life. But IPv6 is different, and the assumption that we are heading to a net-subnet-host model of networks may not be applicable in IPv6, or in the modern world of high-speed complex silicon for routing and switching. Geoff discusses an approach to modelling how network assignments are being used in deployment, which was raised by Nathan Ward at a recent NZNOG meeting. Geoff has been able to look into his huge collection of IPv6 addresses and see what's really going on.
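The prefix/length model makes "subnets inside subnets" concrete. A small illustration using Python's standard ipaddress module; the prefix below is an IPv6 documentation prefix, not a real assignment:

import ipaddress

assignment = ipaddress.ip_network("2001:db8::/48")

# First level of internal structure: 256 possible /56 "site" subnets.
sites = list(assignment.subnets(new_prefix=56))
print(len(sites), "sites, e.g.", sites[0])

# Second level: each /56 holds 256 /64 LANs -- subnets inside subnets.
lans = list(sites[0].subnets(new_prefix=64))
print(len(lans), "LANs in the first site, e.g.", lans[0])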
This time on PING, Doug Madory from Kentik discusses his recent measurements of the RPKI system worldwide, and its visible impact on the stability and security of BGP. Doug makes significant use of the Oregon RouteViews repository of BGP data, a collection maintained continuously at the University of Oregon for decades. It includes data dating back to 1997, originally collected by the NLANR/MOAT project, and holds archives of BGP Routing Information Base (RIB) dumps taken every two hours from a variety of sources, made available in both human-readable and machine-readable binary formats. This collection has become the de facto standard for publicly available BGP state worldwide, along with the RIPE RIS collection. As Doug discusses, research papers which cite Oregon RouteViews data (over 1,000 are known, but many more exist which have not registered their use of the data) invite serious appraisal because of the reproducibility of the research, and thus the testability of the conclusions drawn. It is a vehicle for higher quality science about the nature of the Internet through BGP. Doug presented on RPKI and BGP at the APOPS session held in February at APRICOT/APNIC57 in Bangkok, Thailand.
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses Starlink again, and the ability of modern TCP flow control algorithms to cope with the highly variant loss and delay seen over this satellite network. Geoff has been doing more measurements using Starlink terminals in Australia and the USA, at different times of day, exploring the system’s behaviour. Starlink has broken new ground in Low Earth Orbit internet services. Unlike geosynchronous satellite services, which have a long delay but constant visibility of the satellite in stationary orbit above, Starlink requires the consumer to continuously re-select a new satellite as they move overhead in orbit. In fact, a new satellite has to be picked every 15 seconds. This means there’s a high degree of variability in the behaviour of the link, both in signal quality to each satellite and in the brief interval of loss occurring at each satellite re-selection window. It’s a miracle TCP can survive, and in fact, in the case of the newer BBR protocol, thrive, achieving remarkably high throughput if the circumstances permit. This is because of the change from the slow-start, fast-backoff model used in Cubic and Reno to a much more aggressive link bandwidth estimation model, which continuously probes to see if there is more room to play in.
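A toy model shows why the two approaches diverge on a link like this. The sketch below is illustrative only, not Geoff’s measurement code, and all constants are invented: a loss-based sender halves its rate at every 15-second satellite re-selection, while a rate-probing sender in the spirit of BBR keeps probing toward its bandwidth estimate.

LINK_RATE = 100.0        # Mbps, notional satellite link capacity
HANDOVER_PERIOD = 15     # seconds between satellite re-selections

def loss_based(rate: float) -> float:
    """Reno/CUBIC-style: grow additively while below capacity."""
    return rate + 1.0 if rate + 1.0 < LINK_RATE else rate

def rate_probing(rate: float) -> float:
    """BBR-style: keep probing toward the estimated bottleneck rate."""
    return min(rate * 1.25, LINK_RATE)

reno, bbr = 10.0, 10.0
for t in range(60):                     # one loop iteration per second
    if t > 0 and t % HANDOVER_PERIOD == 0:
        reno /= 2   # loss-based sender treats handover loss as congestion
        # The probing sender's bandwidth estimate survives the brief loss.
    reno = loss_based(reno)
    bbr = rate_probing(bbr)
    if t % HANDOVER_PERIOD == 0:
        print(f"t={t:2d}s  loss-based={reno:5.1f} Mbps  probing={bbr:5.1f} Mbps")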
This time on PING, Dr Mona Jaber from Queen Mary University of London (QMUL) discusses her work exploring IoT, Digital Twins and social-science-led research in the field of networking and telecommunications. Dr Jaber is a senior lecturer at QMUL and is the founder and director of Digital Twins for Sustainable Development Goals (DT4SDG) at QMUL. She was one of the invited keynote speakers at the recent APRICOT/APNIC57 meeting held in Bangkok, and the podcast explores the three major themes of her keynote presentation:

The role of deployed fibre optic communication systems in measurement for sustainable green goals.

Digital Twin simulation platforms for exploring the problem space.

Social-science-led research, an inter-disciplinary approach to formulating and exploring problems, which has been applied to Sustainable Development-related research through technical innovation in IoT, AI, and Digital Twins.

The fibre optic measurement method is Distributed Acoustic Sensing, or DAS: "DAS reuses underground fibre optic cables as distributed strain sensing where the strain is caused by moving objects above ground. DAS is not affected by weather or light and the fibre optic cables are often readily available, offering a continuous source for sensing along the length of the cable. Unlike video cameras, DAS systems also offer a GDPR-compliant source of data." (The DASMATE Project, at theengineer.co.uk)

This episode of PING was recorded live in the venue and is a bit noisy compared to the usual recordings, but it's well worth putting up with the background chatter!
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses the European Union’s consideration of taking a role in the IETF, as itself. Network engineers, policy makers and scientists from all around the world have participated in the IETF, but this is the first time an entity like the EU has considered participating, as itself, in the process of standards development. What’s led to this outcome? What is driving the concern that the EU, as a law-setting and treaty body and an inter-governmental trade bloc, needs to participate in the IETF process? Is this a misunderstanding of the nature of Internet standards development, or does it reflect a concern that standards are diverging from society’s needs? Geoff wrote this up in a recent opinion piece on the APNIC Blog, and the podcast is a conversation around the topic.
DNS OARC's many faces

2024-03-20 (40:59)

This time on PING we have Phil Regnauld from the DNS Operations, Analysis, and Research Center (DNS-OARC) talking about the three distinct faces OARC presents to the community. Phil came to the OARC president's role replacing Keith Mitchell, who had been the founding president from 2008 through to this year. Phil previously worked with the Network Startup Resource Center (NSRC), with AFNOG, and with the Francophone Internet community at large. DNS-OARC has at least three distinct faces. First, it is a community of DNS operators and researchers, who maintain an active ongoing dialogue face to face in workshops and online in the OARC Mattermost community hub. Secondly, it is a home, repository and ongoing development environment for DNS-related tools, such as DNSViz (written by Casey Deccio), hosting the AS112 project, and development of the DSC system, amongst many other tools. Thirdly, it is the organiser and host of the Day In The Life (DITL) activity, the periodic collection of 48-72 hours of DNS traffic from the DNS root operators and other significant sources of DNS traffic. Stretching back over 10 years, DITL is a huge resource for DNS research, providing insights into the use of the DNS and its behaviour on the wire.
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses a new proposed DNS resource record called DELEG. The record is being designed to aid in managing where a DNS zone is delegated. Delegation is the primary mechanism used in the DNS to separate responsibility between child and parent for a given domain name. The DELEG RR is designed to address several problems, including a goal of moving to new transports for the name resolution service the DNS provides to all other Internet protocols. Additionally, Geoff believes it can help with the cost and management issues inherent in out-of-band external domain name management through the registry/registrar process, bound up in the whois system and in a protocol called the Extensible Provisioning Protocol, or EPP. There are big costs here, and they include some problems dealing with intermediaries who manage your DNS on your behalf. Unlike whois, EPP, and registrar functions, DELEG would be an in-band mechanism between the parent zone, any associated registry, and the delegated child zone. It’s a classic disintermediation story about improved efficiency, and it enables the domain name holder to nominate intermediaries for their services via an aliasing mechanism that has until now eluded the DNS.
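DELEG is still only a draft, so there is nothing to query yet, but the delegation machinery it would sit alongside is visible today: delegation is expressed through NS records published for the child. A minimal look-up sketch, assuming the dnspython library:

import dns.resolver

# Today, delegation is expressed as NS records; the proposed DELEG record
# would add a richer, in-band signal alongside them in the parent zone.
for rr in dns.resolver.resolve("example.com", "NS"):
    print(rr.target)   # the nameservers example.com is delegated to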
This time on PING we have Amreesh Phokeer from the Internet Society (ISOC) talking about a system they operate called Pulse, available at https://pulse.internetsociety.org/. Pulse’s purpose is to assess the “resiliency” of the Internet in a given locality. Similar systems we have discussed before on PING include APNIC’s DASH service, aimed at resource-holding APNIC members, and the MANRS project. Both of these take underlying statistics, like resource distribution data or measurements of RPKI uptake and BGP behaviours, and present them to the community; in the case of MANRS there’s a formalised “score” which shows your ranking against current best practices. The Pulse system measures resilience in four pillars: Infrastructure, Quality, Security and Market Readiness. Some of these are “hard” measures analogous to MANRS and DASH, but in addition Pulse includes “soft” indicators, like the economic impacts of design decisions in an economy of interest, the extent of competition, and less formally defined attributes like the amount of resiliency behind BGP transit. This allows the ISOC Pulse system to consider governance-related aspects of the development of the Internet, and it has a simple scoring model which allows a single health metric, analogous to a physician’s use of pulse and blood pressure to assess your condition, but this time applied to the Internet.
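As a flavour of what a single health metric over several pillars can look like, here is a deliberately simple sketch. This is not ISOC's actual Pulse methodology: the pillar names match the episode, but the weighting scheme and scores are invented.

PILLARS = ("infrastructure", "quality", "security", "market_readiness")

def resilience_index(scores: dict, weights: dict = None) -> float:
    """Weighted mean of pillar scores, each on a 0-100 scale."""
    if weights is None:
        weights = {p: 1.0 for p in PILLARS}          # equal weighting
    total = sum(weights[p] for p in PILLARS)
    return sum(scores[p] * weights[p] for p in PILLARS) / total

example = {"infrastructure": 62, "quality": 71,
           "security": 55, "market_readiness": 48}
print(f"{resilience_index(example):.1f}")            # -> 59.0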
DNS is the new BGP

2024-02-07 (54:00)

In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses the role of the DNS in directing where your applications connect to, and where content comes from. Although this is more “steering” traffic than “routing” it in the strict sense of IP packet forwarding (that’s still the function of the Border Gateway Protocol, or BGP), it does in fact represent a kind of routing decision: selecting the content source or server logistically “best” or “closest” to you. So, in the spirit of “Orange is the new Black”, DNS is the new BGP. As this change in the delivery of content has emerged, the effective control over this kind of routing decision has also become more concentrated, into the hands of the small number of at-scale Content Distribution Networks (CDNs) and associated DNS providers worldwide. This is far fewer than the 80,000 or so BGP speakers with their own AS, and represents another trend to be thought about. How we optimise content delivery isn’t decided in common amongst us; it’s managed by simpler contractual relationships between content owners and intermediaries. The upside, of course, remains the improvement in efficiency of fetch for each client, and the reduction in delay and loss. But the evolution of the Internet over time, and the implications for governance of these “steering” decisions, is going to be of increasing concern. Read more about Geoff’s views on concentration in the Internet, governance, and economics on the APNIC Blog and at APNIC Labs.
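To make the "steering" idea concrete, here is a toy sketch of location-dependent DNS answers. This is not any CDN's actual logic: the hostname, the client-to-region mapping, and the addresses (documentation ranges) are all hypothetical.

EDGE_NODES = {
    "apac": "203.0.113.10",   # documentation addresses, not real edges
    "eu":   "198.51.100.10",
    "us":   "192.0.2.10",
}

def region_of(resolver_ip: str) -> str:
    """Stand-in for a real geolocation lookup."""
    return {"203.0.113.1": "apac", "198.51.100.1": "eu"}.get(resolver_ip, "us")

def steer(qname: str, resolver_ip: str) -> str:
    """Answer the same name differently depending on who is asking."""
    return EDGE_NODES[region_of(resolver_ip)]

print(steer("cdn.example.com", "203.0.113.1"))   # -> 203.0.113.10 (APAC edge)
print(steer("cdn.example.com", "8.8.8.8"))       # -> 192.0.2.10 (default: US)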
In this episode of PING, Leslie Daigle from the Global Cyber Alliance (GCA) discusses their honeynet project, measuring bad traffic Internet-wide. This was originally focussed on IoT devices with the AIDE project, but is clearly more generally informative. Leslie also discusses the Quad9 DNS service, GCA’s domain trust work and the MANRS project. Launched in 2014 with support from ISOC, MANRS now has a continuing relationship with GCA, and may represent a model for the routing community regarding the ‘bad traffic’ problem which the AIDE project explores. Leslie has a long history of work in the public interest, as Chief Internet Technology Officer of the Internet Society and with the IETF. She is currently the chair of the MOPS working group, has co-authored 22 RFCs, and was chair of the IAB for five years.
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses the change in IP packet fragmentation behaviour adopted by IPv6, and the implications of a change in IETF “normative language” regarding the use of IPv6 in the DNS. IPv4 arguably succeeds over so many variant underlying links and networks because it’s highly adaptable to fragmentation in the path. IPv6 requires that only the end hosts fragment, which limits how intermediate systems can handle IPv6 data in flight. In the DNS, increasing complexity from things like DNSSEC means that DNS packet sizes are getting larger and larger, which risks invoking the IPv6 fragmentation behaviour in UDP. This has consequences for the reliability and timeliness of the DNS service. For this reason, a revision of the IETF normative language (the use of the capitalised MUST, MAY, SHOULD and MUST NOT) directing how IPv6 integrates into the DNS service in deployment has risks. Geoff argues for a “first, do no harm” approach to this kind of IETF document. Read more about IPv6, fragmentation, the DNS and Geoff’s measurements on the APNIC Blog and APNIC Labs.
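The size pressure is easy to probe for yourself. A small sketch assuming the dnspython library (the query name, resolver address and buffer size are just examples): it asks for a DNSSEC-heavy record with a large EDNS buffer, then reports the UDP response size and whether the server set the truncation (TC) bit instead.

import dns.message
import dns.query
import dns.flags

QNAME, QTYPE = "example.com", "DNSKEY"   # a signed zone, large-ish answer

query = dns.message.make_query(QNAME, QTYPE, want_dnssec=True,
                               use_edns=0, payload=4096)
response = dns.query.udp(query, "8.8.8.8", timeout=5)

size = len(response.to_wire())
truncated = bool(response.flags & dns.flags.TC)
print(f"{QNAME}/{QTYPE}: {size} bytes, TC={truncated}")
# Responses much over ~1400 bytes risk IPv6 fragmentation in UDP, which
# is one reason resolvers now often advertise a 1232-byte EDNS buffer.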
In this episode of PING, Sara Dickinson from Sinodun Internet Technologies and Terry Manderson, VP of Information Security and Network Engineering at ICANN, discuss the ICANN DNS stats collector system, which ICANN commissioned and Sinodun wrote for them. This system consists of two parts: a DNS stats compactor framework, which captures data in the C-DNS format (a specified set of data in CBOR encoding), and a DNS stats visualiser, which uses Grafana. The C-DNS format is not a complete packet capture, but allows the recreation of all the DNS context of the query and response. It was standardised in 2019, in an RFC authored by Sara, her partner John, Jim Hague, John Bond and Terry. Unlike DSC, which is a 5-minute sample aggregation system, this system is able to preserve a significantly larger amount of the observed DNS query information, and can even be used to re-create an on-the-wire view of the DNS (albeit not one-to-one identical to the original IP packet flows).
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses the rise of Low Earth Orbit (LEO) satellite-based Internet, and the consequences for end-to-end congestion control in TCP and related protocols. Modern TCP has mostly been tuned for constant-delay, low-loss paths, and performs very well at balancing bandwidth amongst the cooperating users of such a link, achieving maximum use of the resource. But a consequence of the new LEO internet is a high degree of variability in delay and loss, and consequently an unstable bandwidth, which means TCP congestion control methods aren’t working quite as well in this kind of Internet. A problem is that with the emergence of TCP bandwidth estimation models such as BBR, and the rise of new transports like QUIC (which continue to use the classic TCP model for congestion control), we have a fundamental mismatch in how competing flows try to share the link. Geoff has been exploring this space with some tests from Starlink home routers, and models of satellite visibility. His Labs Starlink page shows a visualisation of the behaviour of the Starlink system, and a movie of views of the satellites in orbit. Read more about TCP, QUIC, LEO and Geoff’s measurements on the APNIC Blog and APNIC Labs.
In this episode of PING, Verisign fellow Duane Wessels discusses a late-stage (version 08) Internet draft he’s working on with two colleagues from Verisign. The draft is on Negative Caching of DNS Resolution Failures, and is co-authored by Duane, William Carroll, and Matt Thomas. This episode discusses the behaviour of the DNS system overall in the face of failures to answer. There are already mechanisms to deny the existence of a queried name or a specific resource type. There are also mechanisms to define how long this negative answer should be cached, just as there are cache lifetimes defined for how long to hold valid answers: things that do exist, and have been supplied. This time, it’s a cache of not being able to answer. The thing asked about might exist, or it might not. This cached data isn’t saying whether it exists; it’s caching the failure to be able to answer. As the draft states: “… a non-response due to a resolution failure in which the resolver does not receive any useful information regarding the data’s existence.” Prior DNS specifications did provide guidance on caching in the context of positive and negative responses, but the only guidance relating to failure to answer was to avoid aggressive re-querying of the nameservers that should be able to answer.
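A minimal sketch of the idea follows. It is not the draft's reference implementation: the TTL value is illustrative and upstream_query is a hypothetical stand-in for a real network lookup.

import time

FAILURE_TTL = 30.0          # seconds; illustrative, not the draft's value
_failure_cache: dict = {}   # (qname, qtype) -> expiry timestamp

def upstream_query(qname: str, qtype: str):
    """Hypothetical stand-in for a real lookup; here it always times out."""
    raise TimeoutError

def record_failure(qname: str, qtype: str) -> None:
    _failure_cache[(qname, qtype)] = time.monotonic() + FAILURE_TTL

def recently_failed(qname: str, qtype: str) -> bool:
    """True if this query failed within the last FAILURE_TTL seconds."""
    expiry = _failure_cache.get((qname, qtype))
    if expiry is None:
        return False
    if time.monotonic() >= expiry:
        del _failure_cache[(qname, qtype)]   # entry has aged out
        return False
    return True

def resolve(qname: str, qtype: str):
    if recently_failed(qname, qtype):
        return None          # answered from the failure cache, no retry storm
    try:
        return upstream_query(qname, qtype)
    except TimeoutError:
        record_failure(qname, qtype)          # cache the inability to answer
        return None

print(resolve("broken.example", "A"))          # -> None, failure now cached
print(recently_failed("broken.example", "A"))  # -> True: retries are damped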
In this episode of PING, instead of a conversation with APNIC’s Chief Scientist Geoff Huston, we’ve got a panel session from APNIC56 that he facilitated, where Geoff and six guests discussed the 30-year history of APNIC. With Geoff on the panel were:

Professor Jun Murai, known as the ‘father of the Internet’ in Japan. In 1984, he developed the Japan University UNIX Network (JUNET), the first-ever inter-university network in that nation. In 1988, he founded the Widely Integrated Distributed Environment (WIDE) Project, a Japanese Internet research consortium, for which he continues to serve as a board member. Along with Geoff, Jun was one of the main progenitors of what became APNIC.

Elise Gerich, a 31-year veteran of Internet networking, recognised globally for her significant contributions to the Internet. Before retiring, Elise was President of PTI and, prior to that, Vice President of IANA at ICANN. Elise served as the Associate Director for National Networking at Merit Network in Michigan. While at Merit she was also a Principal Investigator for NSFNET’s T3 Backbone Project and the Routing Arbiter Project, and was responsible for much of the early address management impetus which led to the creation of the RIR system.

David Conrad, previously the Chief Technology Officer of ICANN, who was involved in the creation of APNIC as its first full-time employee and founding Director-General.

Akinori Maemura, the JPNIC Chief Policy Officer, and a member of the APNIC EC for 16 years, 13 of which he served as Chair of the EC.

Gaurab Raj Upadhaya, Head of WWW Video Delivery Strategy, Prime Video at Amazon. Gaurab has been active in the Internet community for more than a decade and, like Akinori, served on the APNIC EC for 12 years, 7 of these as Chair of the EC.

Paul Wilson, who has more than thirty years’ involvement with the Internet, including 25 years’ experience as the Director General of APNIC.

The panel discussed the early years of the Internet and the processes which led to the creation of APNIC, along with some significant moments in the life of the registry.
In this episode of PING, Stephen Song discusses his work mapping the Internet. This is a long-term project, which he carries out with support from the Mozilla Corporation and the Association for Progressive Communications (APC). Stephen has long championed the case for open data in telecommunications decision-making, and maintains a list of resources for capacity building and development of the Internet, with a particular focus on Africa. The combination of opaque business practices and the shift from direct end delivery to mediated proxies in the content distribution network model raises questions about where the things users engage with and depend on actually are, and whether network infrastructure can be efficiently and openly planned. The latest episode of PING explores the issues inherent in understanding ‘where things are’ in the modern Internet.
25 million end-user measurements per day, worldwide, from Google advertising.
In June of this year, the Dashboard for AS Health (DASH), a service operated by APNIC, saw a leak of approximately 260,000 BGP routes from a vantage point in Singapore, and sent alerts to around 90 subscribers to our routing misalignment notification service, which is part of DASH. BGP is the state of announcements made and heard worldwide, calculated by every BGP speaker for themselves, and although it’s globally connected and represents “the same” network, not everyone sees all things, as a result of filtering and configuration differences around the globe. BGP should also align with two external information systems: the older Internet Routing Registry (IRR) system, which uses a notation called RPSL to represent routing policy data, including the “route” object, and the Resource Public Key Infrastructure (RPKI), which represents the origin AS (in BGP, who originates a given prefix) in a cryptographically signed object called a ROA. The BGP prefix and origin (the route) should align with what’s in an IRR route object and an RPKI ROA, but sometimes these disagree. That’s what DASH is designed to do: tell you when these three information sources fall out of alignment. I discussed this incident, and the APNIC Information Product family (DASH, a collaboration with RIPE NCC called NetOX, and the delegation statistics portal called REX), with Rafael Cintra, the product manager of these systems, and with Dave Phelan, who works in the APNIC Academy and has a background in network routing operations. You can find the APNIC Information Products here (note that the DASH service needs a MyAPNIC login to be used):

https://dash.apnic.net - the DASH portal login page (MyAPNIC resource login needed)
https://netox.apnic.net - NetOX, the Network Observatory web service
https://rex.apnic.net - REX, the Resource Explorer: delegation statistics for the world
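The ROA side of that alignment check is simple enough to sketch. This is an illustration, not DASH's implementation, using documentation prefixes and a private-use AS number: a ROA authorises an origin AS for a prefix up to a maximum length, and an announcement either matches a ROA, violates every covering ROA, or is covered by none.

import ipaddress

ROAS = [
    # (authorised prefix, max length, authorised origin AS)
    (ipaddress.ip_network("203.0.113.0/24"), 24, 64500),
]

def roa_state(prefix: str, origin_as: int) -> str:
    """Classify an announcement against the ROA set."""
    announced = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_len, roa_as in ROAS:
        if announced.subnet_of(roa_prefix):
            covered = True
            if announced.prefixlen <= max_len and origin_as == roa_as:
                return "valid"
    return "invalid" if covered else "not-found"

print(roa_state("203.0.113.0/24", 64500))   # valid: right origin and length
print(roa_state("203.0.113.0/25", 64500))   # invalid: exceeds maxLength
print(roa_state("198.51.100.0/24", 64500))  # not-found: no covering ROA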
In this episode of PING, APNIC’s Chief Scientist Geoff Huston discusses the future of VLSI as Moore’s Law comes to an end. This was motivated by a key presentation made at the most recent ANRW session at IETF 117 in San Francisco. For over five decades we have been able to rely on an annual, latterly biennial, doubling of speed, called Moore’s Law, and a halving of the size of the technology inside a microchip: Very Large Scale Integration (VLSI), the basic building block of the modern age being the transistor. From its beginnings off the back of the diode, replacing valves but still using discrete components, to the modern reality of trillions of logic “gates” on a single chip, everything we have built in recent times which includes a computer has been built under the model “it can only get cheaper next time round”. But for various reasons explored in this episode, that isn’t true any more, and won’t be true into the future. We’re going to have to get used to the idea that it isn’t always faster, smaller, cheaper, and this will have an impact on how we design networks, including details inside the protocol stack which go to the processing complexity of forwarding packets along the path. A few times, both Geoff and I get our prefixes mixed up, and may say millimetres for nanometres, or even worse, on air. We also confused the order of letters in the company acronym TSMC, the Taiwan Semiconductor Manufacturing Company. Read more about the end of Moore’s Law on the APNIC Blog and at the IETF:

Chipping Away at Moore’s Law (August 2023, Geoff Huston)
It’s the End of DRAM As We Know It (July 2023, Philip Levis, IETF 117 ANRW session)