Hacker Public Radio

Author: Hacker Public Radio


Description

Hacker Public Radio is a podcast that releases shows every weekday, Monday through Friday. Our shows are produced by the community (you) and can be on any topic that is of interest to hackers and hobbyists.
This show has been flagged as Clean by the host.

This series is dedicated to exploring little-known (and occasionally useful) trinkets lurking in the dusty corners of UNIX-like operating systems. As the zeroth entry of this series, we'll have a little introduction to what it is supposed to be about and why you might want to listen. So that you don't leave without getting at least one piece of useful information, it will end with a little curio that you might find helpful someday.

Back in 2010, I was the editor of the newsletter, titled The Open Pitt, for the Western Pennsylvania Linux Users Group in Pittsburgh. We distributed it as a two-page PDF, so we had to have enough material to fill each issue. Because we were having some trouble getting contributions, I started writing columns in a series called "UNIX Curio" to occupy the empty space. They were inspired in large part by examples I had seen of people re-inventing ways to do things when utilities for the same purpose had already existed for a long time.

The obvious question is: just what is a UNIX Curio? Let's start with the first word, UNIX. While a lot of people write it "Unix" instead, I have chosen to put it in all capitals because that is the way The Open Group, which controls the trademark and the certification process to use it, spells the word [1]. The history of UNIX is complex (search online for more details [2]); the short version is that many variants emerged, often introducing incompatibilities. Even within AT&T/Bell Laboratories, two major branches came out. The Research UNIX lineage, which includes Seventh Edition (sometimes called Version 7), was often used in universities and government, while System III and its more popular successor System V were clearly intended as commercial products [3]. The University of California's BSD was also very influential.

My intention is to talk about things that are relatively common; ideally, they would be present on a large majority of systems so you can actually use them. Luckily, there were people who recognized the value in compatibility, so in the mid-1980s they initiated the development of the POSIX standards [4]. Publication of these not only caused commercial UNIX versions to aim for conformance; it gave Free Software implementations of utilities and operating systems a stable base to shoot for rather than having to chase multiple moving targets. As a result, today's GNU/Linux, FreeBSD, NetBSD, and OpenBSD systems generally behave as specified in POSIX, even if they haven't officially earned the UNIX or POSIX labels, so I treat them as part of the UNIX world.

Moving on to the second word, "curio," it just means "an object of curiosity, often one considered novel, rare, or bizarre." There are many well-used utilities in the UNIX world, but people forget about others because they are only useful in specific circumstances. And when those circumstances arise, these obscure ones don't always get remembered. One purpose of this series is to point out some of them and describe where they can be appropriately put to use. With the flexible tools available on UNIX systems and the ability to string them together, it shouldn't be surprising that people come up with new ways to accomplish a task. I don't want to claim that these curios are always the best way to do something, just that it can be helpful to know they exist and see the way someone else solved the problem.
Also, if you're using an unfamiliar system, sometimes programs you are accustomed to employing might not be installed, so it's good to know about options that are widely available.

So why am I the person to talk about this subject? I am not a UNIX graybeard with decades of professional computing experience. If I did grow a beard, it would only be partially gray, and my working life has been spent in the engineering world, mainly around safety equipment. Sadly, there I have been forced to use Windows almost exclusively. However, in my academic and personal pursuits, I have been involved with using UNIX and Linux for more than 30 years, so I do have a bit of a historical perspective. For some reason, when I encounter an unusual or obscure tool, I want to learn more about it, especially so if I find it to be useful in some way. After gaining that information, I might as well share it with you. In addition, I have been involved with Toastmasters International, a public speaking organization, for about 15 years, so I have experience in presenting things orally.

I was inspired to turn this article series into podcasts by murph [5], who delivered a presentation at the 2025 OLF Conference describing how and why to contribute to Hacker Public Radio [6]. The show notes for curios 1 through 3 will consist of the articles as they were originally written (though with references added). Because some examples, especially code, can be difficult to understand when they are read out loud, the podcasts will sometimes present the information in a different way. Show notes for this curio 0 and for curios 4 and later will be written with the podcast format in mind, so they will more closely match what I say.

Let's end with an actual curio to kick off the series. Have you ever needed a quick reminder about whether the file you're looking for can be found under the /usr or /var directories? On many UNIX systems, man hier will give you an overview of how the file hierarchy is organized. This manual page is not a standard, but it was present in Seventh Edition UNIX [7] and many descendants, direct and indirect, including every Linux distribution I have ever used. There are attempts to standardize the layout; in the Linux world, the Filesystem Hierarchy Standard (FHS) [8], now hosted by Freedesktop.org [9], intends to set a path to be followed. It should be noted that systemd has its own idea of how things should be laid out based on the FHS; if it is in use, try man file-hierarchy instead, as it will likely be a more accurate description.

I hope this gives you a good idea of what to expect in future episodes. The first one will be about shell archives, so keep an eye on Hacker Public Radio's schedule for it to appear.

References:
[1] The Open Group Trademarks https://www.opengroup.org/trademarks
[2] History of Unix https://en.wikipedia.org/wiki/History_of_Unix
[3] The Unix Tutorial, Part 3 https://archive.org/details/byte-magazine-1983-10/page/n133/mode/2up
[4] POSIX Impact https://sites.google.com/site/jimisaak/posix-impact
[5] Correspondent: murph https://hackerpublicradio.org/correspondents/0444.html
[6] OLF Conference - December 6th, 2025 https://www.youtube.com/watch?v=hyEunLtqbrA&t=25882
[7] File system hierarchy https://man.cat-v.org/unix_7th/7/hier
[8] Finding a successor to the FHS https://lwn.net/Articles/1032947/
[9] Freedesktop.org now hosts the Filesystem Hierarchy Standard https://lwn.net/Articles/1045405/

Provide feedback on this episode.
This show has been flagged as Clean by the host.

Thorium Reactors

01 Introduction
In this episode we will describe the use of thorium in nuclear power, including what thorium is, how it differs from uranium, and what sort of reactors can use it.
03 What is thorium
05 How thorium differs from uranium
07 Sources of Thorium
09 Why there is interest in using thorium as a fuel
10 Abundance of Thorium
11 Some Countries Have a Lot of It
12 Thorium Breeder Reactors are Simpler than Uranium Breeder Reactors
14 Supposed Lower Nuclear Weapons Potential
16 What is Thorium Breeding
20 Breeding Ratio
21 What sorts of reactors can use thorium
22 PHWRs - Heavy Water Reactors (Including CANDU)
24 HTR - High Temperature Gas Cooled Reactors
26 MSR - Molten Salt Reactors
29 Light Water Reactors (PWR, BWR)
31 Fast Neutron Reactors
32 The Challenges Facing Thorium Fuelled Reactors
37 Thorium in India - An Example Use Case
39 Why is India Pursuing Using Thorium?
40 How a Thorium Fuel Cycle Would Work in India
43 Current Status

46 Conclusion
Thorium is an abundant material that is seen as an alternative to uranium in nuclear power. Experimental thorium power reactors date back to at least the 1960s. No new reactor technology is required to use thorium: existing, well-proven reactor designs which have been in use for decades can use thorium as fuel. The common light water reactor designs that are popular in some countries, however, are not well suited to using thorium.

Initial interest in thorium was mainly driven by a perception that uranium would be in short supply in future, and slow neutron thorium reactors were cheaper and simpler than fast neutron uranium reactors. However, huge new high-grade supplies of uranium were found in a number of countries, causing uranium prices to fall and reducing interest in finding alternatives. While some R&D continues on thorium fuel in a number of countries, the mainstream of development continues to be on uranium-based fuel. Some countries with abundant thorium reserves, though, maintain a major interest in thorium, with India being the prime example.

In the next episode we will describe small modular reactors.

Provide feedback on this episode.
This show has been flagged as Clean by the host.

These are the commands mentioned in the episode. You may need to use "sudo" to run these commands, depending on how your system is configured.

strace uptime                               # trace every system call made by uptime
strace ls 2>&1 | grep open                  # merge the trace into stdout so grep can filter for open calls
strace -e openat ls /                       # limit the trace to the openat system call
strace ls /does/not/exist                   # watch the failing system calls behind an error message
strace -o ls-trace.log ls                   # write the trace to ls-trace.log instead of stderr
strace -ff -o pid12345-trace.log -p 12345   # attach to PID 12345 and follow forks, one log file per process

HISTORY

The original strace was written by Paul Kranenburg for SunOS and was inspired by its trace utility. The SunOS version of strace was ported to Linux and enhanced by Branko Lankester, who also wrote the Linux kernel support. Even though Paul released strace 2.5 in 1992, Branko's work was based on Paul's strace 1.5 release from 1991.

In 1993, Rick Sladkey took on the project. He merged strace 2.5 for SunOS with the second release of strace for Linux, added many features from SVR4's truss(1), and produced a version of strace that worked on both platforms. In 1994 Rick ported strace to SVR4 and Solaris and wrote the automatic configuration support. In 1995 he ported strace to Irix (and became tired of writing about himself in the third person).

Beginning with 1996, strace was maintained by Wichert Akkerman. During his tenure, strace development migrated to CVS; ports to FreeBSD and many architectures on Linux (including ARM, IA-64, MIPS, PA-RISC, PowerPC, s390, SPARC) were introduced.

In 2002, responsibility for strace maintenance was transferred to Roland McGrath. Since then, strace gained support for several new Linux architectures (AMD64, s390x, SuperH), bi-architecture support for some of them, and received numerous additions and improvements in system calls decoders on Linux; strace development migrated to Git during that period.

Since 2009, strace has been actively maintained by Dmitry Levin. During this period, strace has gained support for the AArch64, ARC, AVR32, Blackfin, C-SKY, LoongArch, Meta, Nios II, OpenRISC 1000, RISC-V, Tile/TileGx, and Xtensa architectures. In 2012, unmaintained and apparently broken support for non-Linux operating systems was removed. Also, in 2012 strace gained support for path tracing and file descriptor path decoding. In 2014, support for stack trace printing was added. In 2016, system call tampering was implemented.

For the additional information, please refer to the NEWS file and strace repository commit log.

Links
https://strace.io
https://en.wikipedia.org/wiki/Strace
https://www.man7.org/linux/man-pages/man1/strace.1.html

Provide feedback on this episode.
This show has been flagged as Clean by the host.

We start with Orwellian depictions of the future read about in the 1950s/60s. Working in the 1970s at companies such as British Telecom and the Lurgie. We hear about telex, mainframes with magnetic tape, typewriters, and the upskilling of the workforce by the labour-exchange. How did a cold and lack of a home telephone lead to businessmen arriving in a foreign land sans camels? Why were filing cabinets replaced by databases (or were they)? We hear about gaming, from a home-made version of Pong all the way to Alone in the Dark. Then modern times: we hear about some favourite YouTube streams and discover that living in the 2020s is (just about) possible without a smartphone.

Provide feedback on this episode.
This show has been flagged as Clean by the host.

In our next look at the game mechanics for Civilization V we examine several related topics: Diplomacy, Spies, and Religious Pressure. They are all ways to interact with other players without the force of arms being involved. And we will discuss the Diplomatic Victory, which is a new victory type added in Civilization V and can be fun to play.

Playing Civilization V, Part 8 - Diplomacy

Other Players

With other players you have a relationship based on their approach to you. They are:

Neutral – This is not Friendly nor is it Hostile. Trades you make with them will be fair from their point of view.
Friendly – They like you, and will accept requests from you more often. Trades will be slightly in your favor from their point of view.
Afraid – This only happens if you have a very substantial advantage in strength, so this is rare. They will readily accept requests from you, and trades will be in your favor.
Guarded – They are suspicious and defensive, and will be more likely to be unfriendly. Trades will be harder to achieve, and favor them rather than you.
Deceptive – They will pretend to be friendly, but they are plotting against you. They may bribe other players to declare war on you. They will not accept requests for help, and trades will be hard to achieve.
Hostile – They hate you, and are completely open about it. Trade deals, if you can get them, will be heavily against you.
War – This means they have decided to go to war with you. But they need the right conditions, so they may pretend to be Friendly, Neutral, Guarded, or Hostile while they wait for those conditions to mature.

These are not set in stone, as you can modify how the other player feels towards you by your actions. Having friends in common will improve your relationship, as will having enemies in common. Agreeing to their requests will also improve things. But if you cannot agree, just say so. The worst negative modifier is when you agree to do something, and then do the opposite. Saying no is also negative, but not as bad. Finally, remember that negatives will erode over time if they are not reinforced. If you want a very detailed look at the mechanics and details of this, check out https://civ-5-cbp.fandom.com/wiki/Detailed_Guide_to_Diplomacy.

City-States

City-States are also important diplomatic partners. We'll cover all of the benefits in a different section, but here I want to focus on how they enable the Diplomatic Victory. At a certain point the United Nations will be born out of the World Congress, and when this happens a Diplomatic Victory is possible. This will occur when any player reaches the Information Era, or whenever half of the players have reached the Atomic Era. Diplomatic Victory requires that you get the votes of a certain number of delegates to the United Nations. Each player gets delegates based on their population, and there are also some additional delegates you can earn, such as through building the World Wonder Forbidden Palace, which gives you two additional delegates. Anyone planning for a Diplomatic Victory should consider building this Wonder as mandatory. But each City-State gets one delegate, and if you are allied with them their delegate is yours. The mechanics of City-State relationships is that they love gifts, and cash is always the best. So anyone planning a Diplomatic Victory would be well-advised to focus on building a large Treasury.
You will know when a World Leader vote is coming up in the United Nations, and can make cash drops on any City-States that are not already allied with you before the vote. But watch out that another player doesn't do the same thing after you and snipe away some of your allies. Also, you can place your spies in City-States to rig elections, and that is another way to get them to ally with you.

Spies and Espionage

Spies are simply awarded to you whenever any player enters the Renaissance Era. After that you receive another spy each time you advance to another Era. So you can in general have as many as 5 Spies, but if you build the National Intelligence Agency you get one more. This is a National Wonder, and should be a mandatory build if you are going for a Diplomacy victory. And England starts with 1 extra Spy, so if you play as England you could get as many as 7 Spies. Spies can be used for offense or defense. If you station one of your spies in one of your cities it can operate as a counter-spy, and may thwart or even kill an enemy spy. If you are well ahead in technology, that might be a good use, since other players will be trying to steal your tech. But if you are behind, you might want to use your spies to steal tech from other players. You may be successful in this, but the theft does not go unnoticed, and the other player may use one of their spies to counter your operation. If your spy is killed, you will get another one in 3-5 turns, but if your spy was a high-rank spy with promotions, that is a serious loss, so you may want to move that spy elsewhere for a while.

Diplomats

When you assign a spy to the capital of another player you can designate them as a Diplomat. They will take a few turns (depends on game speed, but around 6 turns on normal speeds) to get set up. This is called "Making Introductions", but the point is that if you need an effective diplomat, don't wait until the last minute. Diplomats can be useful in several ways. Early on, they allow you to trade votes in the World Congress. And they will bring you intelligence about intrigues, and you can then share that with other players. And it can also give you a view of the other player's City Screen. Once you have researched Globalization your Diplomats can help with a Diplomatic Victory, because each one counts as one additional vote in the United Nations for World Leader. You can change a spy into a Diplomat and vice versa just by moving the Spy/Diplomat from its current location to another location, which will trigger the ability to change the job assignment. This means that when you first get Spies, and they cannot yet be used to get additional Delegate votes as Diplomats, you can assign them to City-States, where they can help you get alliances. Then as you start to research Globalization, move them to the capitals of other players and turn them into Diplomats. This of course assumes you want to win a Diplomatic victory. If instead you are going for a Science victory and are ahead in Science, it is probably best to station them in your own cities to do counter-intelligence work. If you are ahead in Science, other players will be trying to steal tech from you.

Religious Pressure

If you have researched all of the Piety Social Policy Tree, you will have the option to choose a Reformation Belief to add to your religion. One of these, Underground Sect, allows your spies to exert religious pressure against the city they have been sent to. However, this effect is fairly small.
If there is not a Follower of your religion in the city, it seems to do nothing. But in combination it can flip cities to your religion. Start by sending in a Missionary to spread your religion, then your spy can add to that. And you should also combine that with a trade route to add additional religious pressure. And by gradually moving your spies, missionaries, and trade routes from city to city, you can make your religion dominant in a region.

Diplomatic Victory

This can be a fun way to win, and I have done it. If you want to get a leg up, start with a Civ that gives you advantages, such as Greece or Venice (although my last Diplomatic Victory was achieved with Ethiopia, which is generally regarded as a military/domination Civ. You can win any victory type with any civ, and it can be fun to "play against type"). Greece gets an advantage from relations with City-States, which are key to a Diplomatic Victory because each one gets a vote for World Leader. And Venice is interesting because you cannot build settlers. But you can use cash to puppet City-States, and you can purchase units in puppeted City-States as well. Cash is king in the Venice strategy, and you will want to get as many Trade Routes as possible. The first two should send Food to Venice to help boost your population. Since you will only ever have one city as Venice, you will want to max it out. All trade routes after that should focus on cash. Use your cash to purchase or upgrade military units, and employ a defensive strategy. You want enough military to deter any aggression against you, but you should avoid making any hostile moves against others if possible. Remember, this is a strategy for a Diplomatic Victory. If you want to go to war, don't choose Venice. Instead choose one of the Domination Civs, like the Zulus or the Mongols.

Links:
https://civ-5-cbp.fandom.com/wiki/Detailed_Guide_to_Diplomacy
https://www.palain.com/gaming/civilization-v/playing-civilization-v-part-8/

Provide feedback on this episode.
This show has been flagged as Clean by the host.

Create a Linux kiosk at your library

Start without a guest account

The first few steps of this process don't actually require a guest user directory to exist, so do NOT create your guest user account yet. However, you do need to choose what your guest user account is going to be called. A reasonable account name for Don's purposes is libraryguest. On my personal computer I call my guest account guestaccount, and I've used kioskguest on some installations. I avoid just the name "guest" because in modern computing the term "guest" gets used in a few other ways (such as a "guest operating system" in a virtual environment), and it's just easier to find something unique in logs. Choose a unique name for your guest account, but don't create it yet. For this article, I'm using libraryguest.

Create the PostSession script

By default, GDM recognises several states: Init, PostLogin, PreSession, and PostSession. Each state has a directory located in /etc/gdm. When you place a shell script called Default in one of those directories, GDM runs the script when it reaches that state. To trigger actions to clean up a user's environment upon logout, create the file /etc/gdm/PostSession/Default. You can add whatever actions you want to run upon logout to the Default script. In the case of Don's library, we wanted to clear everything from the guest's home directory, including browser history, any LibreOffice files or GIMP files they may have created, and so on. It was important that we limited the very drastic action of removing all user data to just the guest user. We didn't want the admin's data to be erased upon logout, so whatever rule we added to /etc/gdm/PostSession/Default had to be limited to the guest user. Here's what we came up with:

#!/usr/bin/sh
echo "$USER logged out at `date`" >> /tmp/PostSession.log
if [ "X$USER" = "Xlibraryguest" ]; then
    rm -rf "$HOME"
fi
exit 0

The first line is for logging purposes. The /tmp directory gets cleared out on most distributions automatically, so we weren't worried about creating a file that'll grow forever and eventually crash the computer. If your distribution of choice doesn't clean out /tmp automatically, create a cron job to do that for you. GDM knows what user triggered the logout process, so the if statement verifies that the user logging out is definitely the libraryguest user (that's the literal name of the user we created for library patrons). Note that the whitespace around the square brackets is important, so be precise when typing! As long as it is libraryguest, then the script removes the entire user directory ($HOME). That can be extremely dangerous if you make a mistake, so do thorough testing on a dummy system before implementing a script like this! If you get a condition wrong, you could erase your entire home directory upon logout. In this example, I've successfully limited the rm command to a logout action performed by user libraryguest. The entire /home/libraryguest directory is erased, and the computer returns to the GDM login screen. When a new user logs in, a fresh directory is created for the user.

You can put any number of commands in your script, of course. You don't have to erase an entire directory. If all you really want to do is clear browser history and any stray data, then you can do that instead. If you need to copy specific configuration files into the environment, you can do that during the PreSession state.
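As an illustration of that last point, here is a minimal sketch of what a /etc/gdm/PreSession/Default script might look like. This is not from the original article: the /etc/skel-kiosk staging directory is a hypothetical example, and you would adapt the paths to whatever configuration your kiosk actually needs.

#!/usr/bin/sh
# Hypothetical sketch: seed the guest session with default config files
# before the desktop starts. /etc/skel-kiosk is an assumed staging
# directory that you would populate yourself.
if [ "X$USER" = "Xlibraryguest" ]; then
    cp -r /etc/skel-kiosk/. "$HOME"/
    chown -R libraryguest:libraryguest "$HOME"
fi
exit 0

Because PreSession scripts run as root, the chown at the end hands the copied files over to the guest user.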
Just be sure to test thoroughly before committing your creation to your users!

What happens when the guest doesn't log out

At this point, the computer erases all of the user's data when the user logs out, but a reboot or a shutdown is different to a logout. GDM doesn't enter a PostSession state after a reboot signal has been received, even if the reboot occurs during an active GDM session. The easiest and safest way to erase an entire home directory when there's a cut to system power is to use a temporary RAM filesystem (tmpfs) to house the data in the first place. If the systems you're configuring have 8 GB of RAM or more, and the system is exclusively used as a guest computer, you can probably afford to use RAM as the guest's home directory. If your system doesn't have a lot of RAM, then you can use the systemd work-around in the next section.

Assuming you have the RAM to spare, and that your systems are supported by a backup power supply, you can add a tmpfs entry in /etc/fstab. In this example, my tmpfs is mounted to /home/libraryguest and is just 2 GB:

tmpfs /home/libraryguest tmpfs rw,nosuid,nodev,size=2G 0 0

That's plenty of space for some Internet browsing and even a few LibreOffice documents to be saved while a user works. Mount the new volume:

$ sudo mount /home/libraryguest

Next, you must create the libraryguest user manually in a terminal. The useradd command creates user profiles:

$ sudo useradd --home-dir /home/libraryguest libraryguest
useradd: warning: the home directory /home/libraryguest/ already exists.
useradd: Not copying any file from skel directory into it.

Because you've already created a location for the home directory, you do get a warning after creating the user. It's only a warning, not a fatal error, and the guest account is automatically populated later. Create a password for the new user:

$ sudo passwd libraryguest

That's it! You've created a guest account that refreshes with every logout and every reboot. You can skip over the next section of this article.

Using systemd targets instead of a ramdisk

Assuming you can't create a ramdisk for temporary user data, you can instead create a systemd service that runs a script when the reboot, poweroff, and multi-user targets are triggered:

[Unit]
Description=Kiosk cleanup

[Service]
Type=oneshot
ExecStart=/usr/local/bin/kiosk-cleanup.sh

[Install]
WantedBy=poweroff.target reboot.target multi-user.target

Save the file to /etc/systemd/system/kioskmode.service and then enable it:

$ sudo systemctl enable --now kioskmode

The script, like the GDM script, removes the libraryguest directory. Unlike the GDM script, this one must also recreate an empty home directory and grant it user permissions:

#!/usr/bin/bash
# Remove the guest home directory, then recreate it with correct ownership
rm -rf /home/libraryguest
mkdir /home/libraryguest
chown -R libraryguest:libraryguest /home/libraryguest

Grant the script itself permission to run:

$ sudo chmod +x /usr/local/bin/kiosk-cleanup.sh

Now the libraryguest user data is erased after:

Logout
Reboot
Shutdown
Startup

Essentially, no matter how the computer loses its session or its power, the libraryguest account starts fresh when a new session is started.

Security and privacy

Using systemd to erase data at shutdown and startup isn't strictly as secure as using a temporary ramdisk for all user data. Should the computer lose power suddenly, all saved user data in the libraryguest account is present during the next boot. Of course, it's erased as soon as multi-user.target is called by systemd, but it is technically possible to interrupt the boot process and mine for data.
You must use full drive encryption to protect data from being discovered by an interrupted boot sequence.

Why not just use xguest?

On many Linux distributions, the xguest package is designed to provide the Guest account, which resets after each logout. It was an extremely useful package that I installed on every machine I owned, because it's handy to be able to let friends use my computer without risking them making a mess of my home directory. Lately, however, it seems that xguest is failing to launch a desktop, presumably because it relies on X11. If xguest works for you in your tests, then you may want to use it instead of the solution I've presented here. My solution offers a lot of flexibility, thanks to GDM's autodetection of session states.

Kiosks in libraries

Privacy and personal information are more important than ever. Regardless of how you set up a kiosk for your library, you have an obligation to your users to keep them informed of how their data is being stored. This goes both ways. Users need to know that their data is destined to be erased as soon as they log out, and they also deserve to be assured that their data is not retained. However, it's also your responsibility to admit that glitches and exceptions could occur. Users need to understand that the computers they're using are public computers on a public network. Encryption is being used for traffic and for data storage, but you cannot guarantee absolute privacy. As long as everyone understands the arrangement, everyone can compute with confidence. Linux, GDM, and systemd are great tools to help libraries create a sustainable, robust, honest, and communal computing platform.

Show notes taken from https://www.both.org/?p=13327

Provide feedback on this episode.
This show has been flagged as Clean by the host.

In 1985, I started to work at a telecom equipment manufacturer. We had a mainframe computer in our combined office and lab room. We were four sitting in the room, and there was this one terminal for all of us, and maybe even for someone more. Downstairs, we at the component technology department had our big climate-controlled laboratories. I used an HP 85 computer with the Basic programming language to automate measurements of resistors. And there were several more of them for other measurements of various electronic components. More advanced computers were also used in the labs, and as I recall, with other languages than Basic as well. I remember I briefly learned a bit about one of those languages but have forgotten which one.

The secretary at the department could send Telex messages around the world. We handed a hand-written manuscript to her and she typed it into the Telex system. And she had a Xerox computer with big, at least 8-inch, floppy discs. Not so many years later my manager got a personal computer running DOS, and some years later DOS computers came to the rest of the staff as well. But also very early on we had a Sun Unix workstation. And for many years Unix was my daily driver at work.

Before I started to work, in school we had some education in Basic programming. We were using the Luxor ABC 80 computer, which was very successful and well regarded, at least in Sweden. At the end of my school time, my school got the top-notch ABC 800 with a colour screen. At home, so I could get a chance to learn somewhat more about computers and Basic programming at my own pace, I got a Sinclair ZX80 computer, which I later upgraded to a ZX81.

One summer job when I was a student was at Televerket, the Swedish PTT. It meant that I visited numerous exchange stations. Many were in the countryside, some with very few subscribers, so I could hear the relay start when someone was making a call. At bigger stations there was noise from relays all the time. As I mentioned, after my studies were completed I was working with telecom equipment, in particular for land line telephony. Not least, I worked with components for the line cards, the card at the telephone exchange that faces towards the end user.

The book The Cuckoo's Egg is a hacker thriller based on a true story that happened in the mid-1980s, going on for a year. It was written by the hunter shortly after. Cliff Stoll describes Unix commands, which are similar to Linux. He talks about passwords, about encryption, and a lot more. Many technical details he describes by using analogies with more common, non-technical life examples. A security hole in GNU Emacs, software still around today, plays a central role in how the hacker could penetrate systems. Fixing and updating security holes is very relevant today as well. Many things in computers and technology have changed. But at the same time, very many of the problems are valid today, although they are somewhat different. And the way he describes technical details for the non-technical reader is relevant also today, I believe.

At the same time as the book has many technical details, he also describes his daily life at home, the left-wing culture he belonged to at the university, his long hair and the dress code he belonged to. And the music. He also describes his contacts with numerous authorities and his frustration in those contacts. I am very impressed by his analytical research approach, his persistence, his skills and inventiveness, including the inventiveness of his girlfriend and others.
One takeaway for me is that he kept a detailed log book. It is an important research tool. The log book, together with the printouts of exactly what the hacker did, were core references for analyzing and drawing conclusions, and for retracting and changing conclusions when new information showed that earlier assumptions were wrong. He also wrote a technical paper about it before he wrote the book. For those interested, there are several later videos with him on various topics.

Provide feedback on this episode.
This show has been flagged as Clean by the host.

Warning: this episode contains some spoilers for movies.

The following movies are in my cybersecurity movie library. The ones marked * are included in the review in this episode.

2001: A Space Odyssey (1968) *
AntiTrust (2001)
Blackhat (2015)
Blade Runner (1982)
Catch Me If You Can (2002)
Citizenfour (2015)
CSI: Cyber (2015)
Enemy of the State (1998)
Firewall (2006)
Gattaca (1997) *
Ghost in the Shell (1995)
Hackers (1995) *
Heartbreakers (2001)
The Imitation Game (2014)
I, Robot (2004)
Johnny Mnemonic (1995)
Jurassic Park (1993) *
The KGB, the Computer and Me (1990) * - YouTube link
The Lives of Others (2006) *
Lo and Behold, Reveries of the Connected World (2016)
The Matrix (1999)
The Matrix Reloaded (2003) *
The Matrix Revolutions (2003)
Minority Report (2002)
Mission: Impossible (1996) *
Mr. Robot (2015)
The Net (1995) *
The Net 2.0 (2006)
Ocean's Eleven (2001)
Office Space (1999) *
Person of Interest (2011) *
Revolution OS (2001)
The Social Network (2010)
Sneakers (1992) *
Superman III (1983) *
Surrogates (2009)
Swordfish (2001)
Takedown (2000)
Tron (1982) *
WarGames (1983) *

Slashdot "Best Hacker movie" poll (August 2001): https://slashdot.org/poll/683/best-hacker-flick

This episode contains short excerpt clips from some of these movies, used under fair use for demonstration.

Provide feedback on this episode.
This show has been flagged as Explicit by the host.

New hosts

Welcome to our new hosts: Jim DeVore, Carmen-Lisandrette.

Last Month's Shows

Id   | Day | Date       | Title                                                                         | Host
4544 | Thu | 2026-01-01 | Uncommon Commands, Episode 2                                                  | Deltaray
4545 | Fri | 2026-01-02 | YouTube Subscriptions 2025 #12                                                | Ahuka
4546 | Mon | 2026-01-05 | HPR Community News for December 2025                                          | HPR Volunteers
4547 | Tue | 2026-01-06 | Cheap Yellow Display Project Part 6: The speed and timing of Morse            | Trey
4548 | Wed | 2026-01-07 | YouTube Subscriptions 2025 #13                                                | Ahuka
4549 | Thu | 2026-01-08 | [deprecated] Pomodoro Task Tool (pomotask.sh)                                 | candycanearter
4550 | Fri | 2026-01-09 | Playing Civilization V, Part 7                                                | Ahuka
4551 | Mon | 2026-01-12 | "Elsbeth in IT: Since '97" (Part 2)                                           | Elsbeth
4552 | Tue | 2026-01-13 | Printer Conspiracy                                                            | MrX
4553 | Wed | 2026-01-14 | Nuclear Reactor Technology - Ep 4 Less Common Reactor Types                   | Whiskeyjack
4554 | Thu | 2026-01-15 | How I do todo                                                                 | Jim DeVore
4555 | Fri | 2026-01-16 | HPR Beer Garden 8 - Belgian Christmas Ales                                    | Kevie
4556 | Mon | 2026-01-19 | Nitro man! RC Cars                                                            | operat0r
4557 | Tue | 2026-01-20 | Why I prefer tar to zip                                                       | Klaatu
4558 | Wed | 2026-01-21 | YouTube Subscriptions 2025 #14                                                | Ahuka
4559 | Thu | 2026-01-22 | Enkele off line vertaaltools                                                  | Ken Fallon
4560 | Fri | 2026-01-23 | Arthur C. Clarke: Other Works, Part 2                                         | Ahuka
4561 | Mon | 2026-01-26 | A bit about Mission:Libre, a new project for 11-14 year olds in free software | Carmen-Lisandrette
4562 | Tue | 2026-01-27 | Software development doesn't end until it's packaged                          | Klaatu
4563 | Wed | 2026-01-28 | Nuclear Reactor Technology - Ep 5 Fast Reactors                               | Whiskeyjack
4564 | Thu | 2026-01-29 | MakeMKV error                                                                 | Archer72
4565 | Fri | 2026-01-30 | HPR Beer Garden 9 - Barley Wine                                               | Kevie

Comments this month

These are comments which have been made during the past month, either to shows released during the month or to past shows. There are 20 comments in total.

Past shows

There are 6 comments on 5 previous shows:

hpr4313 (2025-02-12) "Why I made a 1-episode podcast about a war story" by Antoine.
  Comment 3: Ken Fallon on 2026-01-23: "Spammer"
hpr4424 (2025-07-17) "How I use Newsboat for Podcasts and Reddit" by Archer72.
  Comment 7: Ken Fallon on 2026-01-03: "Some podcast aggregators show ccdn.php as file name #321"
  Comment 8: Archer72 on 2026-01-05: "Re: download-filename-format for HPR podcasts"
hpr4532 (2025-12-16) "Cheap Yellow Display Project Part 5: Graphical User Interface" by Trey.
  Comment 2: Ken Fallon on 2026-01-10: "Possible Graphics Library"
hpr4536 (2025-12-22) "Welcome to the Linux Community" by Deltaray.
  Comment 6: Archer72 on 2026-01-05: "Re: Good talk CliMagic"
hpr4543 (2025-12-31) "Nuclear Reactor Technology - Ep 3 Reactor Basics" by Whiskeyjack.
  Comment 2: Kevin O'Brien on 2026-01-01: "Really enjoying this series"

This month's shows

There are 14 comments on 9 of this month's shows:

hpr4546 (2026-01-05) "HPR Community News for December 2025" by HPR Volunteers.
  Comment 1: Archer72 on 2026-01-06: "Nuclear Reactor series"
  Comment 2: Henrik Hemrin on 2026-01-07: "Linux"
hpr4551 (2026-01-12) ""Elsbeth in IT: Since '97" (Part 2)" by Elsbeth.
  Comment 1: operat0r on 2026-01-15: "White Male"
hpr4552 (2026-01-13) "Printer Conspiracy" by MrX.
  Comment 1: candycanearter07 on 2026-01-24: "printer issues"
hpr4554 (2026-01-15) "How I do todo" by Jim DeVore.
  Comment 1: brian-in-ohio on 2026-01-17: "Welcome"
  Comment 2: candycanearter07 on 2026-01-24: "good first show!"
hpr4555 (2026-01-16) "HPR Beer Garden 8 - Belgian Christmas Ales" by Kevie.
  Comment 1: KarldaTech on 2026-01-16: "Christmas Ale"
hpr4557 (2026-01-20) "Why I prefer tar to zip" by Klaatu.
  Comment 1: candycanearter07 on 2026-01-20: "interesting experiment"
hpr4559 (2026-01-22) "Enkele off line vertaaltools" by Ken Fallon.
  Comment 1: ClaudioM on 2026-01-23: "Just What I Needed!"
  Comment 2: mnw on 2026-01-26: "Great Recommendations!"
hpr4561 (2026-01-26) "A bit about Mission:Libre, a new project for 11-14 year olds in free software" by Carmen-Lisandrette.
  Comment 1: Henrik Hemrin on 2026-01-27: "Happy to learn about the project"
  Comment 2: candycanearter07 on 2026-01-28: "cool project"
hpr4563 (2026-01-28) "Nuclear Reactor Technology - Ep 5 Fast Reactors" by Whiskeyjack.
  Comment 1: mnw on 2026-01-29: "Great Series"
  Comment 2: Whiskeyjack on 2026-01-29: "hpr4563 :: Nuclear Reactor Technology - Ep 5 Fast Reactors"

Mailing List discussions

Policy decisions surrounding HPR are taken by the community as a whole. This discussion takes place on the Mailing List, which is open to all HPR listeners and contributors. The discussions are open and available on the HPR server under Mailman.

The threaded discussions this month can be found here: https://lists.hackerpublicradio.com/pipermail/hpr/2026-January/thread.html

Events Calendar

With the kind permission of LWN.net we are linking to The LWN.net Community Calendar. Quoting the site: "This is the LWN.net community event calendar, where we track events of interest to people using and developing Linux and free software." Clicking on individual events will take you to the appropriate web page.

Provide feedback on this episode.
This show has been flagged as Clean by the host.

With winter in full swing in the UK, Dave and Kevie continue their look at winter warmer ales with a review of a couple of British Barley Wine ales. Dave samples Ridgeway's Criminally Bad Elf, whilst Kevie tries out a limited release from Chiltern Brewery, Roger Bodger's Barley Wine.

Connect with the guys on Untappd: Dave, Kevie

The intro sounds for the show are used from:
https://freesound.org/people/mixtus/sounds/329806/
https://freesound.org/people/j1987/sounds/123003/
https://freesound.org/people/greatsoundstube/sounds/628437/

Provide feedback on this episode.
HPR4564: MakeMKV error

2026-01-29 05:16
This show has been flagged as Clean by the host.

I am using MakeMKV version 1.18.2, the most up-to-date version of the program.

USB Blu-ray drive BD-MLT UJ240AS reads a DVD or Blu-ray disc correctly.
Matshita SATA Blu-ray drive BDDVDRW CH20L (Hewlett Packard) stalls with a DVD or Blu-ray disc.
The disc does not stall with Handbrake.
There is enough power, using an adapter that provides 12 V.

Before: MakeMKV v1.18.1 linux(x64-release) stuck when launched.

How do I download older versions? MakeMKV old version repo:
makemkv-bin-1.17.7.tar.gz 2024-05-15 16:29
makemkv-oss-1.17.7.tar.gz 2024-05-15 16:31

After:

Recorded with: Zoom H1 Essential Microphone
Monitored with: soundcore V20i open ear headphones

Provide feedback on this episode.
This show has been flagged as Clean by the host.

Fast Reactors

03 Fast versus Slow Neutrons
"Fast neutron" reactors are ones which use the "fast neutron" reaction. This is as opposed to "slow" or "thermal" neutron reactors, which use a slow neutron reaction. Nearly all reactors in use today use a slow neutron reaction.
04 Moderators
06 No Moderator in Fast Neutron Reactors
07 Burners versus Breeders
08 Fast Fission Fuel Cycle
08 "Typical" Fuel
09 Other Methods
10 Reprocessing
11 Fuel Types
11 Oxide
12 Metal
13 Nitride
14 Carbide
15 Coolant
16 Liquid Sodium
18 Liquid Lead or Lead-Bismuth
19 Helium Gas
20 Molten Salt
21 History of Fast Neutron Reactors
21 Origins
22 Reasons for Developing Them
23 Reasons They are Still Being Developed
24 This is a Proven Technology
25 Plutonium Stockpiles

26 Pros and Cons of Fast Reactors
If fast reactors are more expensive and difficult to operate than slow reactors, why is there any interest in them?

27 Pros
Fast neutron reactors can use all of the uranium supply by converting the U-238 to plutonium as well as using the U-235. Slow neutron reactors can only use the U-235, plus converting a very small proportion of the U-238 to plutonium. This means that a given amount of fuel will go much further when used with a fast neutron reactor than a slow one.

28 Some (but not all) fast neutron reactors can produce more plutonium than they use. This extra plutonium can be used to make uranium-plutonium mixed oxide (or MOX) fuel to be used in slow reactors, or it can be used to power a thorium fuel cycle. So the higher cost of the fast neutron reactors can be offset by having them produce fuel for several slow neutron or thorium reactors.

29 They can also use up or "burn" radioactive waste. That is, highly radioactive elements which are a byproduct of fuel use but not usable as fuel by themselves can be separated from the spent fuel and fed back into the reactor, where the additional radiation will convert them into elements or isotopes which are either not radioactive or which are otherwise easier to dispose of.

30 Cons
There are a number of cons, however, as otherwise there would be a lot more fast neutron reactors in the world. Since water, even "light" water, is a moderator, fast neutron reactors cannot use water as a coolant. Other alternative coolants must be used, and these complicate the design of the reactor and make it more difficult to operate.

31 Alternative compatible coolants may be corrosive, and so new materials may need to be developed for both the reactor vessel and the fuel cladding. Alternative coolants are often opaque, making it difficult to inspect the reactor. The fuel cycle requires reprocessing spent fuel, which means that reprocessing facilities have to be set up, which is an additional expense.

32 Fast neutron reactors were primarily developed on the premise that uranium supplies were limited and would soon become very expensive. However, new very large and very high grade uranium deposits were discovered in Canada, Australia, and Kazakhstan, causing uranium prices to fall rather than rise. As a result it is much cheaper to operate a once-through fuel cycle than to build fast neutron reactors.

33 Future Prospects
Currently fast neutron reactors are not economically competitive with slow neutron reactors for electric power generation, so there isn't a lot of interest from prospective customers. Originally, interest in them was driven by a belief that the world would run short of uranium. However, higher uranium prices sparked increased mineral exploration, which resulted in finding large high grade reserves of low cost uranium, undercutting the need for economizing on its use.

34 There is still ongoing R&D, though, as they offer several other use cases. One is to get rid of radioactive waste elements by turning them into non-radioactive or less radioactive isotopes or elements. The other is to provide a supply of plutonium for fuelling thorium reactors.

35 Conclusion
This has been a short overview of fast neutron reactors, including their history, uses, and underlying design features. In the next episode we will describe the use of thorium in nuclear power, including what thorium is, how it differs from uranium, and what sort of reactors can use it.

Provide feedback on this episode.
This show has been flagged as Clean by the host.

Development isn't over until it's packaged

Most software development I've done has been utilities for highly specific workflows. I've written code to ensure that metadata for a company's custom file format gets copied along with the rest of the data when the file gets archived, code that ensures a search field doesn't mangle input, lots of Git hooks, file converters, parsers, and of course my fair share of dirty hacks. Because most software projects I work on are designed for a specific task, very few of them have required packaging. My utilities have been either integrated into a larger code base I'm not responsible for, or else distributed across an infrastructure by an admin. It's like a magic trick, which has made my life conveniently easier but, as magic does, it has also tricked me into thinking that my development work is done once I can prove that my code does its job. The reality is that code development isn't actually done until you can deliver it to your users in a format they can install.

I don't think I'm alone in forgetting that software delivery is the real final product. There are many reasons some developers stop short of providing an installable package for the code they've worked on for weeks or months or years. First of all, packaging is work, and after writing and troubleshooting code for months, sometimes you just want your work to be over as soon as everything functions as expected. Secondly, there are a lot of software package formats out there, regardless of what platform you're delivering to. However, I view packaging as part of quality assurance. There are lots of benefits you gain by packaging your code into an installer, and you don't have to target every package format. In fact, you get the benefits of packaging by creating just one package.

Checking for consistency

When you package your code as an installable file, whether it's an RPM file or a Bash script or a Flatpak or AppImage or EXE or MSI or anything else, you are checking your code base for consistency. Pick whatever package format you're most comfortable with, or the one you think represents the bulk of your target audience, and you're sure to find that the package tooling expects to be automated. Nobody wants to start packaging from scratch every time they update code, so naturally packaging tools are designed to be configured once for a specific code base and then to create updated packages each time the code base is updated. If you're building a package for your project and discover that you have to manually intervene, then you've discovered a bug in your code.

Imagine that you've got a project repository with a name in camel-case. You hadn't noticed before, but your code refers to itself in a mix of lowercase and camel-case. Your package build grinds to a halt because a variable used by the packaging tools suddenly can't find your code base, because it was set to a lowercase title but the archive of your code uses camel-case. If this happens to you, it's also going to happen for every software packager trying to help you deliver your project to their users. Fix it for yourself, and you've fixed it for everyone.
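To put that scenario in concrete terms, here is a hypothetical shell rendering of the camel-case mismatch; mytool, MyTool, and the version number are invented names for illustration, not from the original article:

# The release archive uses camel-case, but the packaging variable
# was set to lowercase, so an automated build step can't find it.
NAME=mytool
VERSION=1.0
ls                                 # shows: MyTool-1.0.tar.gz
tar xzf "$NAME-$VERSION.tar.gz"    # fails: tar: mytool-1.0.tar.gz: Cannot open

The packaging tool didn't create the bug; it just forced the inconsistency out into the open before a user hit it.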
Discover surprise dependencies

For decades, one of the most common problems of software troubleshooting has been the phrase "well, it works on my machine." No matter how many tools we developers have at our disposal to make it easy to build and run software on a clean system, it's still common to accidentally deliver software with surprise dependencies. It's easy to forget to revert to a clean snapshot in a virtual machine, or to use a container that just happens to have a more recent version of a library than you'd realised, or to get the path of an important executable wrong in a script, or to forget that not all computers ship with a thing you take for granted. Not all packaging tools are immune to this problem, but very robust ones (like RPM and DEB, Flatpak, and AppImage) are. I can't count the times I've tried to deliver an RPM only to be reminded by rpmbuild that I haven't included the -devel version of a dependency (many Linux distributions separate development libraries from binaries). You may not literally fix every problem with dependency management by building a single package, but you can clearly identify what your code requires. It only takes a single warning from your packaging tool for you to add a note to other packagers about what they must include in their own builds.

As an additional bonus, it's also a good reminder to double check the licenses your project is using. In the haze of desperate hacking to get something to just-work-already, it's helpful to get a gentle reminder that you've linked to a library with a different license than everything else. Few packaging tools (if any?) detect licensing requirements directly, but sometimes all it takes is a reminder that you're using a library that comes from a non-standard repo for you to remember to review licensing.

Every package is an example package

Once you've packaged your code once, you create an example for everyone coming to your project to turn it into a package of their own. It doesn't matter whether your example package is an RPM or a DEB or just a TGZ for a front-end like SlackBuild or Arch's AUR, it's the interaction between a packaging system and the input script that counts. Even a novice package maintainer is likely to be able to reverse engineer a packaging script enough to reuse the same logic for their own package.

Here's the build and install section of the RPM for GNU Hello:

%prep
%autosetup

%build
%configure
make %{?_smp_mflags}

%install
%make_install
%find_lang %{name}
rm -f %{buildroot}/%{_infodir}/dir

%post
/sbin/install-info %{_infodir}/%{name}.info %{_infodir}/dir || :

Here's the GNU Hello build script for Arch Linux:

source=(https://ftp.gnu.org/gnu/hello/$pkgname-$pkgver.tar.gz)
md5sums=('5cf598783b9541527e17c9b5e525b7eb')

build(){
  cd "$pkgname-$pkgver"
  ./configure --prefix=/usr
  make
}

package(){
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir/" install
}

There are differences, but you can see the shared logic. There are macros or functions that abstract some common steps of the build process, there are variables to ensure consistency, and they both benefit from using automake as provided by the source code. Armed with these examples, you could probably write a DEB package or Flatpak ref for GNU Hello in an afternoon (a rough sketch of the DEB route follows below).

Package your code at least once

Packaging is quality assurance. Even though a packaging system is really just a front-end for whatever build system your code uses anyway, the rigour of creating a repeatable and automated process for delivering your project is a helpful exercise.
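As promised, here is a minimal, hedged sketch of hand-rolling a binary .deb for GNU Hello with dpkg-deb. This is not from the original article, and a real Debian package would normally be built with debhelper instead; the version number and maintainer fields are placeholder values.

# Build GNU Hello into a staging tree, then wrap it as a .deb.
# Assumes the source is already unpacked in ./hello-2.12.
(cd hello-2.12 && ./configure --prefix=/usr && make && make DESTDIR="$PWD/../pkgroot" install)

# Every binary .deb needs a DEBIAN/control file describing the package.
mkdir -p pkgroot/DEBIAN
cat > pkgroot/DEBIAN/control <<'EOF'
Package: hello
Version: 2.12-1
Architecture: amd64
Maintainer: Example Packager <packager@example.com>
Description: example hand-rolled package of GNU Hello
EOF

# dpkg-deb assembles the staging tree into an installable package.
dpkg-deb --build pkgroot hello_2.12-1_amd64.deb

Rough as it is, the sketch shows the same build-stage-assemble logic as the RPM and PKGBUILD examples above.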
That exercise benefits your project, and it benefits the people eager to deliver your project to other users. Software development isn't over until it's packaged.

Show notes taken from https://www.both.org/?p=13264

Provide feedback on this episode.
This show has been flagged as Clean by the host.

Mission:Libre is a new project for 11 to 14-year-old kids who are interested in free software.

Mission:Libre website: https://missionlibre.org
Carmen's e-mail address: carmen@missionlibre.org
"Libre!" issue 0: https://missionlibre.org/files/libre0.pdf
Mission:Libre's Liberapay: https://liberapay.com/MissionLibre

Provide feedback on this episode.
This show has been flagged as Clean by the host.

This brings us to a look at some of Arthur C. Clarke's other stories: A Time Odyssey (2004-2007), Tales From the White Hart (1957), The Nine Billion Names of God (1954), The Star (1955), Dolphin Island (1964), and A Meeting With Medusa (1971). These stories will wrap up our look at Clarke's Science Fiction, and we have seen a lot of good stuff here. And as a final note, we cover Clarke's Three Laws.

Arthur C. Clarke: Other Works

A Time Odyssey

A collaboration between two of science fiction's best authors: what could possibly go wrong? Well, something went wrong. This series is not bad, but I hesitate to describe it as good. This series was described by Clarke as neither a prequel nor a sequel, but an "orthoquel", a name coined from "orthogonal", which means something roughly like "at right angles", though it is also used in statistics to denote events that are independent and do not influence each other. And in relativity theory Time is orthogonal to Space. And in multi-dimensional geometry we can talk about axes in each dimension as orthogonal to all of the others. It is something I can't picture, being pretty much limited to three dimensions, but it can be described mathematically.

It is sort of like the 2001 series, but not really. It has globes instead of monoliths. And the spheres have a circumference and volume that is related to their radius not by the usual pi, but by exactly three. Just what this means I am not sure, other than that they are not spheres in any usual sense of the word. In this story these spheres seem to be gathering people from various eras and bringing them to some other planet, which gets christened "Mir", though not as a reference to the Russian Space Station. It is a Russian word that can mean "peace", "world", or "village". I have seen it used a lot to refer to a village in my studies of Russian history. Anyway, the inhabitants include two hominids, a mother and daughter, a group of British Redcoats, Mongols from the Genghis Khan era, a UN Peacekeeper helicopter, a Russian space capsule, an unknown Rudyard Kipling, the army of Alexander The Great... Well, at least they have lots of characters to throw around. They end up taking sides and fighting each other. In the end several of the people are returned to Earth in their own time.

But the joke is on them. The beings behind the spheres call themselves The Firstborn because they were the first to achieve sentience. They figure that the best way for them to remain safe is to wipe out any other race that achieves sentience, making them the polar opposite of the beings behind the monoliths in 2001, for whom the mind is sacred. Anyway, the Firstborn have arranged for a massive solar flare that will wipe out all life on Earth and completely sterilize the planet, but conveniently it will happen in 5 years, leaving time for plot development. Of course the people of Earth will try to protect themselves. Then in the third book of the series an ominous object enters the solar system. This is of course a callback to the Rama object. It is like they wanted to take everything from the Rama series and twist it. While I love a lot of Clarke's work and some of Baxter's as well, I think this is eminently skippable. The two of them also collaborated on the final White Hart story, which isn't bad.

Other Works

Tales from the White Hart

This collection of short stories has a unity of setting: a pub called the White Hart, where a character tells outrageous stories.
Other characters are thinly disguised science fiction authors, including Clarke himself. Clarke mentions that he was inspired to do this by the Jorkens stories of Lord Dunsany, which are also outrageous tall tales, though lacking the science fiction aspects of Clarke's stories. Of course this type of story has a long history, and we would do well to mention the stories of Baron Munchausen, as well as those of L. Sprague de Camp and Fletcher Pratt as found in Tales from Gavagan's Bar. And Spider Robinson would take this basic idea and turn it into a series of books about Callahan's Place. Stories of this type are at least as much fantasy as anything, but quite enjoyable, and I think I can recommend all of them as worth the time to while away a cold winter's evening while sitting by a warm fire with a beverage of your choice.

The Nine Billion Names of God

This short story won a retrospective Hugo in 2004 as the best short story of 1954. The idea is that a group of Tibetan monks believe that the purpose of the universe is to identify the nine billion names of God, and that once this has been done the universe will no longer have a purpose and will cease to exist. They have been identifying candidates and writing them down, but the work is very slow, so they decide that maybe with a little automation they can speed it up. So they get a computer (and in 1954, you should be picturing a room-sized mainframe) and hire some Western programmers to develop the program to do this. The programmers don't believe the monks are on to anything here, but a paycheck is a paycheck. They finish the program and start it running, but decide they don't want to be there when the monks discover their theory doesn't work, so they take off early without telling anyone and head down the mountain. But on the way, they see the stars going out, one by one.

The Star

This classic short story won the Hugo for Best Short Story in 1956. The story opens with the return of an interstellar expedition that has been studying a system where the star went nova millennia ago. But the expedition's astrophysicist, a Jesuit priest, seems to be in a crisis of faith. And if you think it implausible that a Jesuit priest could also be an astrophysicist, I would suggest you look into the case of the Belgian priest Georges Lemaître, who first developed the theory of the Big Bang. In the story, the expedition learns that this system had a planet much like Earth, inhabited by intelligent, peaceful beings much like us. In a tragic turn of events, they knew that their star was going to explode, but they had no capability of interstellar travel. So they created a repository on the outermost planet of the system, one that would survive the explosion, and left records of their civilization there. When the Jesuit astrophysicist calculates the time of the explosion and the travel time for its light, he is shaken: "[O]h God, there were so many stars you could have used. What was the need to give these people to the fire, that the symbol of their passing might shine above Bethlehem?"

Dolphin Island

This is a good Young Adult novel about the People of the Sea, who are dolphins. They save a young boy who had stowed away on a hovership that subsequently crashed; because no one knew he was aboard, he was left among the wreckage when the crew took off in the lifeboats. From here it is the typical Bildungsroman you find in most Young Adult novels.
The dolphins bring him to an island, where he becomes involved with a research community led by a professor who is trying to communicate with dolphins. He learns various skills there, survives dangers, and in the end has to risk his life to save the people on the island. If you have a 13-year-old in your house, this is worth looking for.

A Meeting With Medusa

This won the 1972 Nebula Award for Best Novella. It concerns one Howard Falcon, who early in the story has an accident involving a helium-filled airship, is badly injured, and requires time and prosthetics to heal. He then promotes an expedition to Jupiter that uses similar technology: a hot-hydrogen balloon-supported craft. This is to explore the upper reaches of Jupiter's atmosphere, the only feasible way to explore given the intense gravity of this giant planet; attempting to land on the solid surface would mean being crushed by the gravity and air pressure, so that is not possible. The expedition finds there is life in the upper clouds of Jupiter. Some of it is microscopic, like a kind of "air plankton" which is bioluminescent. But there are large creatures as well, one of which is like a jellyfish, but about a mile across. This is the Medusa of the title. Another is a manta-like creature, about 100 yards across, that preys on the Medusa. When the Medusa starts to take an interest in Falcon's craft, he decides to get out quickly for safety's sake. And we learn that, because of the various prosthetics implanted after the airship accident, Falcon is really a cyborg with much faster reactions than ordinary humans. As we have discussed previously, Clarke loved the sea, and in this novella he uses what he knows of that realm to imagine a plausible ecology in the atmosphere of Jupiter. Of course, when he wrote this novella no one knew about the truly frightening level of radiation around Jupiter, but then a clever science fiction writer could come up with a way to work around that.

Clarke's Three Laws

Finally, no discussion of Arthur C. Clarke can omit his famous Three Laws. Asimov had his Three Laws of Robotics, and Clarke had his Three Laws of technology:

1. When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.
2. The only way of discovering the limits of the possible is to venture a little way past them into the impossible.
3. Any sufficiently advanced technology is indistinguishable from magic.

This concludes our look at Arthur C. Clarke, the second of the Big Three of the Golden Age of Science Fiction. And that means we are ready to tackle the Dean of Science Fiction, Robert A. Heinlein.

Links:
https://en.wikipedia.org/wiki/A_Time_Odyssey
https://en.wikipedia.org/wiki/Tales_from_the_White_Hart
https://en.wikipedia.org/wiki/Joseph_Jorkens
https://en.wikipedia.org/wiki/Baron_Munchausen
https://en.wikipedia.org/wiki/Tales_from_Gavagan%27s_Bar
https://en.wikipedia.org/wiki/Callahan%2
This show has been flagged as Clean by the host.

Offline Translator tools

Translate text offline

LocalTranslate is an offline translation application that uses Firefox's neural translation models (from the mozilla/firefox-translations-models project) to perform high-quality translations locally on your device. Note: LocalTranslate is not affiliated with the Mozilla Foundation in any way.

Links:
LocalTranslate by Shriram Ravindranathan on flathub.org
GPL-3.0 license
Source Code

Offline Translator: on-device translation of text and images

A translator app that performs on-device translation of text and images without sending your data to external servers. Features:
On-device translation using Mozilla's translation models
Transliteration of non-Latin scripts
OCR (Optical Character Recognition) for translating text in images
Automatic language detection
Image translation overlay that preserves original formatting
Support for multiple language pairs
No internet required for translation once models are downloaded
All translation happens locally

Links:
Offline Translator by David Ventura on F-Droid
GNU General Public License v3.0 or later: https://spdx.org/licenses/GPL-3.0-or-later.html
Source Code
hpr3315 :: tesseract optical character recognition
Provide feedback on this episode.
This show has been flagged as Clean by the host. I am subscribed to a number of YouTube channels, and I am sharing them with you.

Links:
https://www.youtube.com/@bulwarkmedia
https://www.youtube.com/@thefabfaux
https://www.youtube.com/@TheGreatWar
https://www.youtube.com/@TheHistoryGuyChannel
https://www.youtube.com/@TheImmedFamily
https://www.youtube.com/@TheKoreanWarbyIndyNeidell
https://www.youtube.com/@TheLanguageTutor
https://www.youtube.com/@TheLincolnProject
https://www.youtube.com/@planetarysociety
https://www.youtube.com/@TheSaxyGamer
https://www.youtube.com/@JSHIPLIFE
https://www.youtube.com/@thespiffingbrit
https://www.youtube.com/@AmyShiraTeitel
https://www.youtube.com/@thefrielsisters
https://www.palain.com/
Provide feedback on this episode.
This show has been flagged as Clean by the host.

Why I prefer tar to zip

I love having choices when it comes to computing, and especially in the world of open source we're spoilt when it comes to archiving files. There's TAR, ZIP, GZIP, BZIP2, XZ, 7Z, AR, ZOO, and more. Of all compression formats, it seems that ZIP has gained ubiquity. It's the one you can use to archive and extract data on nearly every system, including Linux, UNIX, FreeDOS, Android, Windows, macOS, and more. The problem is, ZIP isn't the best tool for the job of archival. Here's why I use TAR instead of ZIP whenever possible.

Each archiving format has an associated command, such as tar, zip, gzip and gunzip, xz, and so on. In terms of compression, they all tend to be basically the same at this point. You might save a few kilobytes or megabytes with one compression algorithm given a specific combination of file types, but it's fair to say that they all produce broadly similar results. Where they differ is in what each command makes available, and what each file format retains.

The tar and zip command showdown

At first glance, tar and zip are similar in capability. By default, the tar command generates an archive that's not compressed. It's just a single file object that contains smaller file objects within it. The resulting object is basically the same size as the sum of its parts:

$ tar --create --file archive.tar pic.jpg file.txt
$ ls -lG
-rw-r--r-- 1 tux 46049280 Jan 7 10:55 archive.tar
-rw-r--r-- 1 tux 45965374 Jan 7 10:55 file.txt
-rw-r--r-- 1 tux 77673 Jan 7 08:34 pic.jpg

You can use the -0 option to simulate this with the zip command:

$ zip -0 archive.zip pic.jpg file.txt
adding: pic.jpg (stored 0%)
adding: file.txt (stored 0%)
$ ls -lG
-rw-r--r-- 1 tux 46049280 Jan 7 10:55 archive.tar
-rw-r--r-- 1 tux 46043355 Jan 7 10:57 archive.zip
-rw-r--r-- 1 tux 45965374 Jan 7 10:55 file.txt
-rw-r--r-- 1 tux 77673 Jan 7 08:34 pic.jpg

The most common use case of each command, however, definitely includes compression.

Level of compression

The balance in choosing either an algorithm (in the case of tar) or a compression level (in the case of zip) is between compression speed and size. In theory, the slower you let the command compress, the smaller the resulting archive; the faster the compression, the bigger the archive. Both commands strive to give you some control over this.

By default (without the -0 option), the zip command also compresses the archive it has created. You can adjust the amount of compression with an option ranging from -0 to -9. The default level is -6.

To add compression to the tar command, you can either use a separate command entirely to compress the resulting TAR file, or you can use one of several options to choose what compression algorithm gets applied to the TAR file during its creation. Here's an incomplete list:

-z or --gzip: Filters the archive through gzip
-j or --bzip2: Filters the archive through bzip2
-J or --xz: Filters the archive through xz
--lzip: Filters the archive through lzip
-Z or --compress: Filters the archive through compress
--zstd: Filters the archive through zstd
--no-auto-compress: Prevents tar from using the archive suffix to determine the compression program so you can specify one (or not) yourself

Decoupling the process of archiving from compression makes sense to me. While the zip command is stuck with basically the same old algorithm year after year, a TAR archive can be compressed using whatever compression algorithm you think is best.
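As a quick illustration of that decoupling, here is a minimal sketch. It assumes GNU tar and an installed zstd; the directory and file names are hypothetical:

# Create the archive on stdout and compress it in a pipeline;
# swap zstd for xz, gzip, or whatever algorithm you prefer.
$ tar --create --file - project/ | zstd -19 -o project.tar.zst

# Decompression can be decoupled the same way: zstd writes the
# uncompressed stream to stdout, and tar reads it from stdin.
$ zstd --decompress --stdout project.tar.zst | tar --extract --file -

With GNU tar you can also pass --auto-compress (or -a) when creating an archive, and tar will pick the compression program from the archive suffix.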
In some cases, you might make that determination based on the type of data you're compressing, or you might be limited by the capabilities of your target system, or you might just want to test a hot new compression algorithm. Here's what the zip command does with a 44 MB text file and a JPEG file, at maximum compression:

$ zip -9 archive.zip file.txt pic.jpg
adding: file.txt (deflated 90%)
adding: pic.jpg (deflated 14%)
$ ls -lG
-rw-r--r-- 1 tux 4.4M Jan 7 11:17 archive.zip
-rw-r--r-- 1 tux 44M Jan 7 10:55 file.txt
-rw-r--r-- 1 tux 76K Jan 7 08:34 pic.jpg

A compressed archive of 4.4 MB, down from a little more than 44 MB, isn't bad. Similarly, the tar command with the --gzip option produces a 4.5 MB archive. However, filtering tar through --xz makes a significant improvement:

$ tar --create --xz --file archive.tar.xz file.txt pic.jpg
$ ls -lG
-rw-r--r-- 1 tux users 3.3M Jan 7 11:17 archive.tar.xz
-rw-r--r-- 1 tux users 44M Jan 7 10:55 file.txt
-rw-r--r-- 1 tux users 76K Jan 7 08:34 pic.jpg

At 3.3 MB, a newer compression algorithm has outperformed ZIP, at least in this particular test. I'm the first to admit that compression tests are subject to many variables, so it's not globally significant that XZ has done better than ZIP in this one example. With some experimentation, I could probably devise a test that gets better results from ZIP. However, this example does demonstrate that it's useful to have an archive tool that is modular enough to allow for the adoption of new algorithms.

Output manipulation

When you extract data from a TAR or ZIP archive, you can choose either to extract specific files or to extract everything at once. I believe it's most common to extract everything, because that's the default behaviour on major desktops like GNOME and macOS. With both the tar and unzip commands, even when you choose to extract everything at once, you still have a choice of where to put the extracted files.

By default, both the tar and unzip commands extract all files into the current directory. If the archive itself contains a directory, then that directory serves as a "container" for the extracted files. Otherwise, the files appear in your current directory. This can get messy, but it's a common enough problem that Linux and UNIX users call it a "tarbomb", because it sometimes feels like an archive has exploded and left file shrapnel in its wake.

However, a tarbomb (or zipbomb) isn't inherently bad. It's a valid use case when you want to essentially overlay updated or additional files onto an existing file system. For example, suppose you have a website consisting of several PHP files across several directories. You can take a copy of the site to your development machine to make updates, and then create an archive of just the files you've updated. Extract the archive on your web server, and each new version of a file lands exactly where it originated, because both tar and unzip retain the filesystem's structure. I use this feature when doing dot-release updates of several different content management systems, and it makes maintenance pleasantly simple; a minimal sketch of this workflow appears below.

Both the unzip and tar commands provide an option to change directory before extraction, so you can store an archive in one directory but send extracted files to a different location.
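First, though, here is the overlay workflow described above as a minimal sketch. The site layout and file names are hypothetical; adapt them to your own tree:

# On the development machine: archive only the files you changed,
# with paths relative to the site root.
$ cd ~/dev/mysite
$ tar --create --gzip --file update.tar.gz \
    index.php includes/config.php themes/default/style.css

# On the web server: extract over the live tree. Each file lands
# exactly where it originated because the archive preserves paths.
$ tar --extract --file update.tar.gz --directory /var/www/mysite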
In the general case, use the --directory option with the tar command:

$ mkdir mytar
$ tar --extract --file archive.tar.xz --directory ./mytar
$ ls ./mytar
file.txt pic.jpg

Use the -d option with unzip:

$ mkdir myzip
$ unzip archive.zip -d ./myzip
$ ls ./myzip
file.txt pic.jpg

The feature unzip doesn't have is the ability to drop directories from the archive before extraction. For example, suppose you want to extract files directly into myzip, but you've been given an archive containing a leading directory called chaff:

$ unzip archive+chaff.zip -d ./myzip
$ ls ./myzip
chaff
$ ls ./myzip/chaff
file.txt pic.jpg

You don't want chaff, but there's no option in unzip to skip it. Frustratingly, the unzip command essentially encourages this anti-pattern: in order to avoid delivering a zipbomb to someone, you thoughtfully nest your files in a useless folder. But by nesting everything in a useless folder, you've also prevented your user from extracting only the files required.

The tar command solves this problem elegantly. You can protect your users from a tarbomb by nesting your files in a useless directory, because tar allows any user to skip over any number of leading directories:

$ tar --extract --strip-components=1 \
    --file archive+chaff.tar.xz --directory ./mytar
$ ls ./mytar
file.txt pic.jpg

Permission and ownership

The ZIP file format doesn't preserve file ownership. The TAR file format does. You might not notice this when using ZIP or TAR archives just on your own personal systems; once a file is extracted, you own the file. However, running tar as the superuser or with the --same-owner option extracts each file with the same ownership it had when archived, assuming the same user and group are available on the system. There's no option for that with the unzip command, because the ZIP file format doesn't track ownership.

The zip command can preserve file permissions, but again tar offers a lot more flexibility. The --same-permissions, --no-same-permissions, and --mode options let you control the permissions assigned to archived files.

Better archiving with tar

It's easy to use either ZIP or TAR interchangeably, because for most general-purpose activities their default behaviour is similar and suitable. However, if you're using archives for mission-critical work involving disparate systems and a diverse set of people, TAR is the technically superior choice. Whether TAR is the "correct" choice depends entirely on your target audience, because there's no doubt that ZIP has greater support. But all things being equal, TAR is the archive format and tar is the archive command I prefer.

Show notes taken from https://www.both.org/?p=13268
Provide feedback on this episode.
This show has been flagged as Clean by the host. Today it's a special Christmas episode, a sort of part two on RC cars, so we're going to talk about nitro cars. Nitro cars are RC cars that run on a fuel mix that is roughly 20% oil; the oil is blended right into the gas. They have a little motor, and you can pay $800 for a motor or you can pay 50 bucks for one.
https://traxxas.com/products/models/electric/rustler-bl2s
Provide feedback on this episode.
This show has been flagged as Clean by the host. Dave and Kevie bring the HPR listeners another festive edition of the Beer Garden, with the focus turning to Belgian Christmas ales. Kevie discovered a scan of the original advert in the Journal de Charleroi from 1896.

Translation: "Christmas beer has arrived at the Arabian Horse and the Globe, those two establishments so famous for their English beers. Go and taste it, because Christmas beer is only sold for a short time."

In this episode Dave samples Baby Jesus by Brouwerij 't Verzet and Kevie tries out La Binchoise Speciale Noel.

Connect with the guys on Untappd: Dave, Kevie

The intro sounds for the show are used from:
https://freesound.org/people/mixtus/sounds/329806/
https://freesound.org/people/j1987/sounds/123003/
https://freesound.org/people/greatsoundstube/sounds/628437/
Provide feedback on this episode.
Comments (2)

Robert Naramore Jr.

awesome

Oct 29th

Denise Wiesner

It's an interesting topic you bring up. Personally, I am appalled by scare tactics, so I'd like to offer a different view. There is a lot wrong with capitalism. The first thing is that capitalists believe their system is the only answer. The hangover after our last industrial revolution gave us shorter working days, safety rules, and employee rights at work. Currently there is a lot of demand out there for sabbaticals, for people taking a break. So hell yeah, give me a robot that does my job so I can recover from stress, spend time with my children, travel, or do volunteer work. Why do we doubt basic income? Currently those breaks are only available to the rich, the single, or the childless. Have you seen a happy cashier? Have you heard a mine worker shouting "Yes, let's continue ruining my lungs instead of giving me proper training so I can work in a solar panel farm"? And as for the doctors: I have met so many who were an utter waste of my time. Yes, give me the Watson system. I had to retrain in my job.

Oct 19th