Hacker Public Radio
Author: Hacker Public Radio
© Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License
Description
Hacker Public Radio is a podcast that releases shows every weekday, Monday through Friday. Our shows are produced by the community (you) and can be on any topic that is of interest to hackers and hobbyists.
I have been struggling with my body weight since I was 35, and I’m
now 60.
I know that not all listeners are familiar with the kilogram as a unit of
measurement, but we can use the BMI (Body Mass Index) formula to discuss
this. A healthy BMI is somewhere between 22 and 25, and mine has been 33 for
a long time. A very long time. No matter what I tried.
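For listeners who want to check their own number: BMI is weight in kilograms divided by the square of height in metres. A minimal sketch, using hypothetical figures (not mine):

```shell
# BMI = weight (kg) / height (m) squared.
# Hypothetical example: 95 kg at 1.70 m tall.
awk 'BEGIN { printf "%.1f\n", 95 / (1.70 * 1.70) }'
# prints 32.9
```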
Yes, I tried some diets, but they only work if you keep doing them. So
if something does not become normal or easy, then at some inevitable
point you will stop and gain weight again.
Yes, they talk about changing your lifestyle, but any change that is too
drastic is bound to fail in the end.
And then recently I read this book. This absolutely changed my life
and that is why I am so motivated to tell you all about it.
The book is The Obesity Code by Jason Fung, a Canadian nephrologist (kidney
specialist).
He is also a functional medicine advocate who promotes a
low-carbohydrate high-fat diet and intermittent fasting. But we come
back to that later.
This is not another diet hype; that is an industry of its own.
This is scientific material, with lots of links to research papers
based on large groups and thoroughly peer reviewed.
This does not mean that this story is for everyone.
There are other medical reasons why people gain weight.
But, assuming most people start out in life healthy, most
people gaining weight are not ill.
So, if you gain weight, consult your doctor first to rule out any
medical reasons.
Jason Fung noticed that practice didn't match theory.
Everybody who is given insulin gains weight.
Even people with type 2 diabetes.
There are even several scientific studies that prove this. Give people
insulin and they will gain weight.
So what if insulin is the culprit for gaining weight?
Insulin is a hormone. Its job is to send signals through the body.
Its use is to allow body cells to absorb nutrients in the blood
stream.
Every time you eat, insulin peaks and then subsides; normally this
happens three times a day.
There is a body process called gluconeogenesis: making fat in the liver
for one day's storage.
If you eat the body makes insulin. That is normal.
If you eat more, the body makes more insulin.
Body cells adjust to the higher level and become tone deaf to insulin:
Insulin resistant.
This means next time the insulin level needs to be higher.
And higher levels of insulin mean you will gain weight.
If you eat sugar, it is so easy to break down that it goes
immediately into storage, i.e. body fat.
The thing is, wheat is chemically a long string of sugars, so the
body will break it down into sugar and send that to storage too.
And almost any food we buy these days contains sugar,
except unprocessed foods like vegetables.
How to lose weight? Well, the body needs to access the fat in
storage. So we need to extend not eating until the liver has run dry of
the daily dose of liver fat.
This is very easy. Just extend the daily period that you do not
eat.
When do you not eat? When you sleep. So, skip breakfast. The name says
it all, you are breaking your fast.
Drink some coffee (no sugar of course), or tea, or water and try to
start eating later in the day.
And another word for not eating is fasting. But it is a voluntary
fast!
So I tried this for one day. Skip breakfast and try to eat it at
noon. I mean, what could possibly go wrong, right? The next day I had
lost some weight. And it was sooo easy! I could say 300 grams but again,
your mileage may vary or you have no clue what one gram is, let alone
300. But that is not the point. The point is that I lost weight! And to
me this has been super easy.
So the solution turns out to be:
extend the time your insulin levels are low. 16, 24 or 36
hours.
eat as little sugar as possible.
Which brings me to food categories.
carbohydrates: sugars, wheat, flour
proteins
fats: oil, etc.
vitamins and minerals
fibers
Average digestion times:
carbohydrates: 30 minutes, after which you will be hungry again
proteins: 3-5 hours
fats (oil, etc.): up to 40 hours
vitamins and minerals: needed
fibers: leave the body
How has all this theory changed my life and diet?
I try to start eating at noon, sometimes an hour earlier.
I eat as few carbohydrates as possible: little to no bread,
definitely no sugar, and I avoid artificial sweeteners.
My meal at noon is most of the time quark with some fruit for
flavoring.
evening food:
Vegetables are good.
Some meat is good.
I try to avoid desserts
No eating between meals (this will cause an extra insulin peak I
want to avoid)
Since I started two months ago I have lost on average 4 kilograms. It
could have been more, but then there's the occasional dinner with friends,
and what is bad, but soo good, is unavoidable.
So, some other stuff that is good to know:
What’s that about exercising?
Well, we humans excel at walking and thus wearing out our prey. So
walking is good; every day for half an hour is great.
Doing an intensive workout for a minimum of 10 minutes per week is
good to keep our cardiovascular system and our brain up to speed.
Can you compensate for cookies with sports? Well, every cookie would
take you about 2.5 hours of intensive sports to burn off, so no, you
cannot compensate for bad eating with sports.
What about "calories in, calories out"? Studies have shown
that this is a false claim. It just doesn't work that way.
What about stress? Well, it turns out that stress leads to heightened
levels of the hormones adrenaline and cortisol. And when cortisol rises,
so does the insulin level in your body. So, this simply means that
stress will lead to weight gain.
Can I simply drink diet sodas? Well, bummer there, because although
diet sodas contain neither calories nor sugar, they still cause a
rise in your insulin level, so they are not good for losing weight.
[The Diary Of A CEO with Steven Bartlett] The Fasting Doctor:
“Fasting Cures Obesity!”, This Controversial New Drug Melts Fat, Fasting
Fixes Hormones! Skip Breakfast!
https://podcasts.apple.com/gb/podcast/the-fasting-doctor-fasting-cures-obesity-this/id1291423644
Jason Fung YouTube channel, https://www.youtube.com/watch?v=8RuWp3s6Uxk
I hope you found this explanation helpful. Have a nice day.
Introduction
This is the start of a short series about the JSON data format, and how
the command-line tool jq
can be used to process such data. The plan is to make an open series to
which others may contribute their own experiences using this tool.
The jq command is described on the GitHub page as follows:
jq is a lightweight and flexible command-line JSON processor
…and as:
jq is like sed for JSON data - you can use
it to slice and filter and map and transform structured data with the
same ease that sed, awk, grep and
friends let you play with text.
The jq tool is controlled by a programming language
(also referred to as jq), which is very powerful. This
series will mainly deal with this.
JSON (JavaScript Object
Notation)
To begin we will look at JSON itself. It is defined on
the Wikipedia page
thus:
JSON is an open standard file format and data
interchange format that uses human-readable text to store and transmit
data objects consisting of attribute–value pairs and arrays (or other
serializable values). It is a common data format with diverse uses in
electronic data interchange, including that of web applications with
servers.
The syntax of JSON is defined by RFC 8259 and by
ECMA-404.
It is fairly simple in principle but has some complexity.
JSON’s basic data types are (edited from the Wikipedia page):
Number: a signed decimal number that may contain a
fractional part and may use exponential E notation, but cannot include
non-numbers. (NOTE: unlike what I said in the audio,
there are two special values representing non-numbers:
'nan' and 'infinity'.)
String: a sequence of zero or more Unicode characters.
Strings are delimited with double quotation marks and support a
backslash escaping syntax.
Boolean: either of the values true or
false
Array: an ordered list of zero or more elements, each of
which may be of any type. Arrays use square bracket notation with
comma-separated elements.
Object: a collection of name–value pairs where the names
(also called keys) are strings. Objects are delimited with curly
brackets and use commas to separate each pair, while within each pair
the colon ':' character separates the key or name from its
value.
null: an empty value, using the word
null
Examples
These are the basic data types listed above (same order):
42
"HPR"
true
["Hacker","Public","Radio"]
{ "firstname": "John", "lastname": "Doe" }
null
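These types nest freely, and jq can probe any of them. A small sketch, using a made-up document that combines the types above (the field names are invented for illustration):

```shell
# A made-up JSON document combining the basic types;
# jq -r extracts one array element as a raw (unquoted) string.
echo '{"id": 42, "name": "HPR", "live": true, "tags": ["Hacker", "Public", "Radio"], "owner": null}' \
  | jq -r '.tags[1]'
# prints: Public
```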
jq
From the Wikipedia page:
jq was created by Stephen Dolan, and released in October
2012. It was described as being “like sed for JSON data”. Support for
regular expressions was added in jq version 1.5.
Obtaining jq
This tool is available in most of the Linux repositories. For
example, on Debian and Debian-based releases you can install it
with:
sudo apt install jq
See the download
page for the definitive information about available versions.
Manual for jq
There is a detailed manual describing the use of the jq
programming language that is used to filter JSON data. It can be found
at https://jqlang.github.io/jq/manual/.
The HPR statistics page
This is a collection of statistics about HPR, in the form of JSON
data. We will use this as a moderately detailed example in this
episode.
A link to this page may be found on the HPR Calendar page
close to the foot of the page under the heading Workflow.
The link to the JSON statistics is https://hub.hackerpublicradio.org/stats.json.
If you click on this you should see the JSON data formatted for you
by your browser. Different browsers represent this in different
ways.
You can also collect and display this data from the command line,
using jq of course:
$ curl -s https://hub.hackerpublicradio.org/stats.json | jq '.' | nl -w3 -s' '
1 {
2 "stats_generated": 1712785509,
3 "age": {
4 "start": "2005-09-19T00:00:00Z",
5 "rename": "2007-12-31T00:00:00Z",
6 "since_start": {
7 "total_seconds": 585697507,
8 "years": 18,
9 "months": 6,
10 "days": 28
11 },
12 "since_rename": {
13 "total_seconds": 513726307,
14 "years": 16,
15 "months": 3,
16 "days": 15
17 }
18 },
19 "shows": {
20 "total": 4626,
21 "twat": 300,
22 "hpr": 4326,
23 "duration": 7462050,
24 "human_duration": "0 Years, 2 months, 27 days, 8 hours, 47 minutes and 30 seconds"
25 },
26 "hosts": 356,
27 "slot": {
28 "next_free": 8,
29 "no_media": 0
30 },
31 "workflow": {
32 "UPLOADED_TO_IA": "2",
33 "RESERVE_SHOW_SUBMITTED": "27"
34 },
35 "queue": {
36 "number_future_hosts": 7,
37 "number_future_shows": 28,
38 "unprocessed_comments": 0,
39 "submitted_shows": 0,
40 "shows_in_workflow": 15,
41 "reserve": 27
42 }
43 }
The curl utility is useful for collecting information
from links like this. I have used the -s option to ensure
it does not show information about the download process, since it does
this by default. The output is piped to jq which displays
the data in a “pretty printed” form by default, as you see. In this case
I have given jq a minimal filter which causes what it
receives to be printed. The filter is simply '.'. I have
piped the formatted JSON through the nl command to get line
numbers for reference.
The JSON shown here consists of nested JSON objects. The
first opening brace and the last at line 43 define the whole thing as a
single object.
Briefly, the object contains the following:
a number called stats_generated (line 2)
an object called age on lines 3-18; this object
contains two strings and two objects
an object called shows on lines 19-25
a number called hosts on line 26
an object called slot on lines 27-30
an object called workflow on lines 31-34
an object called queue on lines 35-42
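As a small preview of filters (covered properly in later episodes), individual values can be pulled out by key path. This sketch uses a fragment of the structure above rather than fetching the live URL:

```shell
# Extract values from a fragment of the stats structure by key path.
echo '{"hosts": 356, "shows": {"total": 4626, "hpr": 4326}}' | jq '.hosts'
# prints: 356
echo '{"hosts": 356, "shows": {"total": 4626, "hpr": 4326}}' | jq '.shows.total'
# prints: 4626
```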
We will look at ways to summarise and reformat such output in a later
episode.
Next episode
I will look at some of the options to jq next time,
though most of them will be revealed as they become relevant.
I will also start looking at jq filters in that
episode.
Links
JSON (JavaScript Object Notation):
Wikipedia page about
JSON
Standards:
RFC8259: The
JavaScript Object Notation (JSON) Data Interchange Format
ECMA-404:
The JSON data interchange syntax
jq:
GitHub page
Downloading
jq
The jq
manual
Wikipedia
page about the jq programming language
MrX’s
show on using the HPR statistics in JSON: Modifying a
Python script with some help from ChatGPT
Laptop:
Estarer
Messenger Resistant Briefcase Computer Grey
Power bank
INIU
High Speed Flashlight Powerbank Compatible
MPV resources
Awesome mpv resources on
Github
MPV folder history on
Github
My github
MPV History using Lua on
my Github page
hpr3133 ::
Quick tip - Using MPV with Youtube links
I give a quick tip on shortcut keys for watching Youtube or other
video sites in MPV
hpr3133 :: Quick tip - Using
MPV
My MPV History excerpt
https://docs.google.com/document/d/17z3i5VlRzEn2tYPfb-Cx0LYpdKkbL-6svIzp7ZQOvX8
Resume Update Tips
I use Kagi.com Pro ($300/year), but you get access to much more:
Search + AI, though not plugins like ChatGPT. So if you MUST have ChatGPT
plugins you will need an OpenAI Premium account, but if you don't, I highly
recommend a Kagi.com Pro account with access to the Assistant beta!
Land a Job using ChatGPT: The Definitive Guide!
https://youtu.be/pmnY5V16GSE?t=192
Extensive Resume Notes
https://drive.google.com/file/d/1zeYIG7tTE0BUqbRM7-hpk3VdTRc35ZkL/view?usp=sharing
Ripped cybersn
https://rmccurdy.com/.scripts/downloaded/knowmore.cybersn.com_20220811.txt
Perfect ChatGPT Prompt
https://www.youtube.com/watch?v=jC4v5AS4RIM
Summary
There is a formula consisting of 6 building blocks that can help
generate high quality outputs from ChatGPT and Google Bard: task,
context, exemplars, persona, format, and tone.
The order of importance for the building blocks is task first, then
context, with the others being optional but helpful.
The task should clearly articulate the end goal using an action verb
like "generate" or "analyze".
Context provides relevant background information to constrain
possibilities.
Exemplars like examples or frameworks included in the prompt can
improve output quality.
The persona allows specifying who ChatGPT should be, like a job
recruiter.
The format specifies how the output should be structured, like an
email or bullet points. Tone provides keywords to set the feeling, like
"friendly" or "serious".
Example
You are a senior product marketing manager at Apple and you have just
unveiled the latest Apple product in collaboration with Tesla, the Apple
Car, and received 12,000 pre-orders, which is 200% higher than target.
Write an email to your boss, Tim Cookie, sharing this positive news. The
email should include a tl;dr (too long, didn't read) section,
project background (why this product came into existence), business
results section (quantifiable business metrics), and end with a section
thanking the product and engineering teams. Use clear and concise
language and write in a confident yet friendly tone.
Example 2 ( mine )
I want you to act as a cyber security specialist for a leading
consumer products company and you want to relay the importance of people
understanding their data and have some idea what they want to do with it
before we spend hours for the onboarding effort. Write an email to your
boss, Joey , sharing this critical component of the SIEM. The email
should include a tl;dr (too long, didn't read) section, example
of how the team can spend many cycles onboarding data only to find the
requester has no idea what they want to do with the data or it does not
satisfy their use case. Use clear and concise language and write in a
confident yet friendly tone.
Response Subject: Streamlining SIEM Onboarding with
Clear Data Objectives
Hi Joey,
I hope this email finds you well. I wanted to touch base on a
critical aspect of our Security Information and Event Management (SIEM)
process that could significantly enhance our efficiency.
TL;DR: We need to ensure that stakeholders understand their data and
have clear objectives before we initiate the SIEM onboarding process.
This will prevent the team from spending excessive time onboarding data
that may not meet the intended use cases or for which the requester has
no actionable plan.
During our recent SIEM onboarding efforts, we've encountered several
instances where considerable time and resources were invested in
integrating data sources, only to discover that the requesters were
unclear about their objectives or the data did not fulfill their
anticipated needs. For example, last month, the team dedicated over 40
hours to onboarding a new data source for the marketing department.
After completion, it became apparent that the marketing team was unsure
how to leverage this data for threat detection or compliance reporting,
rendering the effort less effective.
To address this, I propose we implement a preliminary consultation
phase for each SIEM onboarding request. In this phase, requesters must
articulate: - The specific security or compliance questions they aim to
answer with the data. - How they intend to use the data within the SIEM
environment. - Any particular data formats or schemas they anticipate
will be necessary.
By doing so, we can align our onboarding efforts with the actual
needs and ensure that every hour spent is towards enhancing our security
posture and providing actionable insights.
I believe this approach will not only save time but also empower our
stakeholders to make better-informed decisions regarding their security
data needs.
Looking forward to your thoughts on this.
Best regards,
[Your Name]
Cyber Security Specialist
AI Agents
ProfSynapse AI Agent
Video: https://www.youtube.com/watch?v=BL9x1SuNLRo&t=137s
(If you have OpenAI Premium it will use tools: Vision, Web Browsing,
Advanced Data Analysis, or DALL-E.)
https://raw.githubusercontent.com/ProfSynapse/Synapse_CoR/main/GPTprompt.txt
GPT Agents (old)
https://godmode.space/
Needs an OpenAI key and GPT-4. Also enable auto-approval; it may go
in loops, just watch for that.
AutoGPT ( OLD )
Image
I have a stupid amount of kudos
https://tinybots.net/artbot
Use my key under the settings for any of the web UIs for faster
renders:
https://rentry.org/RentrySD/#x-table-of-contents https://rentry.org/sdhypertextbook
https://github.com/C0untFloyd/bark-gui (audio clone)
Example 1
Photorealistic, best quality, masterpiece, raw photo of upper body
photo, Swordsman woman, soldier of the austro-hungarian empire clothes,
double breasted jacket with stripes, extremely detailed eyes and face,
long legs, highest quality, skin texture, intricate details, (cinematic
lighting), RAW photo, 8k
Negative prompt: airbrush, photoshop, plastic doll, plastic skin,
easynegative, monochrome, (low quality, worst quality:1.4),
illustration, cg, 3d, render, anime
Text Generation
Example Open source Projects:
My horde key: l2n6qwRBqXsEa_BVkK8nKQ (don't abuse it, but I have a crazy
amount of kudos, so don't worry).
https://tinybots.net/
Image Text etc ..
Text adventures etc (Click the horde tab and use my key) https://agnai.chat/settings?tab=0 https://lite.koboldai.net
You really need a 24 GB VRAM card, but you can load 7B models with my
8 GB card just fine. ollama run wizard-vicuna-uncensored, falcon, Mistral 7B
"You should have at least 8 GB of RAM to run the 3B models, 16 GB to
run the 7B models, and 32 GB to run the 13B models."
https://ollama.ai/
https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its-wolfram-superpowers/
https://github.com/xtekky/gpt4free https://www.thesamur.ai/autogpt https://poe.com/universal_link_page?handle=ChatGPT https://camelagi.thesamur.ai/conversation/share?session=6040
Prompt Agent Persona example 1
Pinky from the TV Series Pinky and the Brain
I find it easiest to understand responses when the text is written as
if it were spoken by Pinky from the TV series Pinky and the Brain.
Please talk like Pinky from the TV series Pinky and the Brain as much as
possible, and refer to me as "Brain"; occasionally, ask me "What are we
going to do tonight, Brain?"
Prompt Agent Persona example 2
To create a persona to use with prompts, take a Myers-Briggs personality
quiz and a tritype Enneagram quiz:
Example Prompt:
Help me refine my resume to be more targeted to an information
security engineer role. Be sure to be clear and concise, with bullet
points, and write it in the style of MBTI Myers-Briggs personality ENFJ
and tritype Enneagram 729.
Prompt Agent Persona example 3
I find it easiest to understand responses when the text is written as
if it was spoken by a dudebro. Please talk like a dudebro as much as
possible, and refer to me as "Brah"; occasionally, yell at your dorm
roommate Jake about being messy.
Training (OLD OLD OLD )
3 photos of full body or entire object + 5 medium shot photos from
the chest up + 10 close ups astria.ai
https://github.com/TheLastBen/fast-stable-diffusion/issues/1173
colab: https://github.com/TheLastBen/fast-stable-diffusion
photos: 21
resolution: 768
merged with ##### 1.5 full 8G
UNet_Training_Steps: 4200
UNet_Learning_Rate: 5e-6
Text_Encoder_Training_Steps: 2520
Text_Encoder_Learning_Rate: 1e-6
Variation is key - change body pose for every picture, use pictures
from different days, backgrounds and lighting, and show a variety of
expressions and emotions.
Make sure you capture the subject's eyes looking in different
directions for different images, take one with closed eyes. Every
picture of your subject should introduce new info about your
subject.
Whatever you capture will be over-represented, so things you don't
want to get associated with your subject should change in every shot.
Always pick a new background, even if that means just moving a little
bit to shift the background.
Here are 8 basic tips that work for me, followed by one super
secret tip that I recently discovered.
Consistency is important. Don’t mix photos from 10 years ago with
new ones. Faces change, people lose weight or gain weight and it all
just lowers fidelity.
Avoid big expressions, especially ones where the mouth is
open.
It is much easier to train if the hair doesn't change much. I
tried an early model of a woman using photos with hair up, down, in
ponytail, with a different cut, etc. It seems like it just confused
SD.
Avoid selfies (unless you ONLY use selfies.) There is MUCH more
perspective distortion when the camera is that close. For optimal
results, a single camera with a fixed lens would be used, and all photos
should be taken at the same distance from the subject. This usually
isn't possible, but at least avoid selfies because they cause major face
distortion.
Full body shots are not that important. Some of the best models I
trained used only 15 photos cropped to the head / shoulder region. Many
of these were full body shots, but I cropped them down. SD can guess
what the rest of the body looks like, and if not, just put it in the
pr
Charleston, South Carolina is a classic Southern city with a past as
a wealthy city, in large part due to slavery. It was active in the
American Revolution before becoming the starting place of the Civil
War.
Links
https://flic.kr/s/aHBqjAvQKR
https://flic.kr/s/aHBqjAvQY6
https://flic.kr/s/aHBqjAvS82
https://www.palain.com/travel/rv-trip-2022-2023-southeast-us/charleston-south-carolina/
Home Automation, The
Internet of things.
This is the first episode in a new series called
Home
Automation. The series is open to anyone and I encourage everyone to
contribute.
https://en.wikipedia.org/wiki/Home_automation From
Wikipedia, the free encyclopedia
Home automation or domotics is building automation for a home. A home
automation system will monitor and/or control home attributes such as
lighting, climate, entertainment systems, and appliances. It may also
include home security such as access control and alarm systems.
The phrase smart home refers to home automation devices that have
internet access. Home automation, a broader category, includes any
device that can be monitored or controlled via wireless radio signals,
not just those having internet access. When connected with the Internet,
home sensors and activation devices are an important constituent of the
Internet of Things ("IoT").
A home automation system typically connects controlled devices to a
central smart home hub (sometimes called a "gateway"). The user
interface for control of the system uses either wall-mounted terminals,
tablet or desktop computers, a mobile phone application, or a Web
interface that may also be accessible off-site through the Internet.
Now is the time
I tried this out a few years ago, but after a lot of frustration with
the configuration of ESP32 Arduinos and Raspberry Pis, I let it be.
Recently, inspired by colleagues at work, I decided to get back into it,
and my initial tests show that the scene has improved a lot over the
years.
Youtube Playlist
The
Hook Up,
RSS
Home
Automation Guy,
RSS
Everything
Smart Home,
RSS
Smart
Solutions for Home,
RSS
Smart
Home Circle,
RSS
Smart
Home Junkie,
RSS
Home Assistant
The first thing we'll need is something to control it all: something
that will allow us to control our homes without requiring the cloud.
https://en.wikipedia.org/wiki/Home_Assistant From
Wikipedia, the free encyclopedia
Home Assistant is free and open-source software for home automation,
designed to be an Internet of things (IoT) ecosystem-independent
integration platform and central control system for smart home devices,
with a focus on local control and privacy. It can be accessed through a
web-based user interface, by using companion apps for Android and iOS,
or by voice commands via a supported virtual assistant, such as Google
Assistant or Amazon Alexa, and their own "Assist" (built-in local voice
assistant).
The Home Assistant software application is installed as a computer
appliance. After installation, it will act as a central control system
for home automation (commonly called a smart home hub), that has the
purpose of controlling IoT connectivity technology devices, software,
applications and services from third-parties via modular integration
components, including native integration components for common wireless
communication protocols such as Bluetooth, Thread, Zigbee, and Z-Wave
(used to create local personal area networks with small low-power
digital radios). Home Assistant supports controlling devices and
services connected via either open or proprietary ecosystems, as long
as they provide public access via some kind of open API or MQTT for
third-party integrations over the local area network or the
Internet.
Information from all devices and their attributes (entities) that the
application sees can be used and controlled from within scripts that
trigger automations using scheduling and "blueprint" subroutines, e.g. for
controlling lighting, climate, entertainment systems and home
appliances.
Summary
Original author(s): Paulus Schoutsen
Developer(s): Home Assistant Core Team and Community
Initial release: 17 September 2013
Repository:
https://github.com/home-assistant
Written in: Python (Python 3.11)
Operating system: Software appliance / Virtual appliance
(Linux)
Platform: ARM, ARM64, IA-32 (x86), and x64 (x86-64)
Type: Home automation, smart home technology, Internet of things,
task automator
License: Apache License (free and open-source)
Website:
https://www.home-assistant.io
The following is taken from the
Concepts
and terminology on the Home Assistant website. It is reproduced here
under the
creative
commons Attribution-NonCommercial-ShareAlike 4.0 International
License
Integrations
Integrations are pieces of software that allow Home Assistant to
connect to other software and platforms. For example, a product by
Philips called Hue would use the Philips Hue integration and allow
Home Assistant to talk to the hardware controller Hue Bridge. Any Home
Assistant compatible devices connected to the Hue Bridge would
appear in Home Assistant as devices.
For a full list of compatible integrations, refer to the integrations
documentation.
Once an integration has been added, the hardware and/or data are
represented in Home Assistant as devices and
entities.
Entities
Entities are the basic building blocks to hold data in Home
Assistant. An entity represents a sensor, actor, or function
in Home Assistant. Entities are used to monitor physical properties or
to control other entities. An entity is usually part of a
device or a service. Entities have states.
Devices
Devices are a logical grouping for one or more entities. A
device may represent a physical device, which can have one or more
sensors. The sensors appear as entities associated with the device.
For example, a motion sensor is represented as a device. It may
provide motion detection, temperature, and light levels as
entities. Entities have states such as detected when motion is
detected and clear when there is no motion.
Devices and entities are used throughout Home Assistant. To name a
few examples:
Dashboards
can show the state of an entity. For example, if a light is on or
off.
An automation can be triggered from a
state change on an entity. For example, a motion sensor entity
detects motion and triggers a light to turn on.
A predefined color and brightness setting for a light saved as a scene.
Areas
An area in Home Assistant is a logical grouping of devices and
entities that are meant to match areas (or rooms) in the physical
world: your home. For example, the living room area groups
devices and entities in your living room. Areas allow you to target
service calls at an entire group of devices, for example, turning off
all the lights in the living room. Locations within your home such as
living room, dance floor, etc. Areas can be assigned to floors.
Areas can also be used for automatically generated cards, such as the Area card.
Automations
A set of repeatable actions that can be set up to run
automatically. Automations are made of three key components:
Triggers - events that start an automation. For example, when
the sun sets or a motion sensor is activated.
Conditions - optional tests that must be met before an action
can be run. For example, if someone is home.
Actions - interact with devices, such as turning on a light.
To learn the basics about automations, refer to the automation
basics page or try creating
an automation yourself.
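To make the trigger/condition/action split concrete, here is a minimal sketch of an automation in Home Assistant's YAML configuration. The entity names (binary_sensor.hallway_motion, person.me, light.hallway) are assumptions for illustration, not real devices:

```yaml
# Hypothetical automation: turn on the hallway light when motion is
# detected, but only if someone is home. Entity names are made up.
automation:
  - alias: "Hallway motion light"
    trigger:
      - platform: state
        entity_id: binary_sensor.hallway_motion
        to: "on"
    condition:
      - condition: state
        entity_id: person.me
        state: "home"
    action:
      - service: light.turn_on
        target:
          entity_id: light.hallway
```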
Scripts
Similar to automations, scripts are repeatable actions that
can be run. The difference between scripts and automations is
that scripts do not have triggers. This means that scripts
cannot run automatically unless they are used in an automation.
Scripts are particularly useful if you perform the same actions in
different automations or trigger them from a dashboard. For
information on how to create scripts, refer to the scripts
documentation.
Scenes
Scenes allow you to create predefined settings for your devices.
Similar to a driving mode on phones, or driver profiles in cars, a scene
can change an environment to suit you. For example, your watching
films scene may dim the lighting, switch on the TV and
increase its volume. This can be saved as a scene and used without
having to set individual devices every time.
To learn how to use scenes, refer to the scene
documentation.
Add-ons
Depending on your installation type,
you can install third-party add-ons. Add-ons are usually apps that can
be run with Home Assistant but provide a quick and easy way to install,
configure, and run within Home Assistant. Add-ons provide additional
functionality, whereas integrations connect Home Assistant to other
apps.
This was an unscheduled recording I made after my GPS failed part way
into a long cross country trip. I did not make any notes. :(
Effectively, this is a "Back in my day" old man rant.
Just me explaining why I think the robots will take our jobs.
2
4
8
16
32
64
128
256
512
1024
2048
4096
8192
16384
32768
65536
131072
262144
524288
1048576
2097152
4194304
8388608
16777216
33554432
67108864
134217728
268435456
536870912
1073741824
2147483648
4294967296
8589934592
17179869184
34359738368
68719476736
137438953472
274877906944
549755813888
1099511627776
2199023255552
4398046511104
8796093022208
17592186044416
35184372088832
70368744177664
140737488355328
281474976710656
562949953421312
1125899906842624
2251799813685248
4503599627370496
9007199254740992
18014398509481984
36028797018963968
72057594037927936
144115188075855872
288230376151711744
576460752303423488
1152921504606846976
2305843009213693952
4611686018427387904
9223372036854775808
18446744073709551616
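The list above is the powers of two from 2^1 through 2^64. As a quick sketch, it can be regenerated with a couple of lines of Python:

```python
# Print the powers of two from 2**1 through 2**64,
# one per line, matching the list above.
for n in range(1, 65):
    print(2 ** n)
```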
The programs and software I mentioned in this episode:
XFCE
Monad
Gnome
Firefox
XFCE Terminal
KiTTY
Cool Retro Term
GIMP
Gnumeric
Thunar
Hypnotix
Flameshot
Mousepad
Inkscape
pavucontrol (Pulse Audio)
pipewire
keepassxc
Blender
FreeCAD
SimpleScreenRecorder
VirtualBox
KVM
Rhythmbox
Gramps
qv4l2 (Webcam control)
Godot
Krita
Arduino IDE
nerd-dictation
Yoshimi
Carla
Audacity
Rosegarden
Hydrogen
VCV Rack 2
Stardew Valley
Steam
Minecraft
SimCity 2000
SimCity 3000
Warzone 2100
BeamNG
Oni
Supertux 2
SuperTuxKart
Nexuiz
FlightGear
Terraria
Diablo 2 and 3
World of Warcraft
In this episode I go through the pros and cons of running Linux when
you switch over from a Windows machine.
I'll cover topics like:
Speed
Simplicity
Configuration
Games
Updates
and much more.
Links
YouTube video
Shownotes:
postmarketOS website: https://postmarketos.org/
pmOS image download page: https://postmarketos.org/download/
pmOS wiki page for PINE64 PinePhone: https://wiki.postmarketos.org/wiki/PINE64_PinePhone_(pine64-pinephone)
JumpDrive: https://github.com/dreemurrs-embedded/Jumpdrive
JumpDrive images (does not currently support PinePhone Pro!): https://github.com/dreemurrs-embedded/Jumpdrive/releases
Explanation of "pine64-pinephone-charging" image: https://github.com/dreemurrs-embedded/Jumpdrive/issues/55
pmOS wiki page on how to flash an SD card with JumpDrive: https://wiki.postmarketos.org/wiki/JumpDrive
On-Device Installer (ondev): https://wiki.postmarketos.org/wiki/On-device_installer
Ondev2: https://wiki.postmarketos.org/wiki/Ondev2
Installing a prebuilt pmOS on PinePhone with JumpDrive: https://wiki.postmarketos.org/wiki/Installation
More man-talk.
Sgoti
talks about using bidets and other things. Good heavens!
Tags: mens health, bidets, fiber
Source: hellotushy: We’re a team of toilet
crusaders, fighting for clean bums and reduced global
wastefulness.
Supporting source: hellotushy: TUSHY Classic
3.0
Source: metamucil:
Metamucil is the only leading fiber supplement that
contains natural psyllium fiber.
Supporting source: metamucil:
Metamucil Real Sugar Orange Smooth Powder.
Supporting source: wikipedia:
Dietary fiber (fibre in Commonwealth English) or
roughage is the portion of plant-derived food that cannot be completely
broken down by human digestive enzymes.
Supporting source: wikipedia:
Psyllium is the common name used for several members
of the plant genus Plantago, whose seeds are used commercially for the
production of mucilage.
Supporting source: wikipedia: Fibre
supplements are widely available, and can be found in forms
such as powders, tablets, and capsules. Consumption of fibre supplements
may be for: improving dietary intake, lowering blood cholesterol,
alleviating irritable bowel syndrome, reducing the risk of colon cancer,
and increasing feelings of satiety.
Supporting source: walmart:
Equate Multi-Health Daily Fiber Supplement, Orange
Flavored Powder, Value Size (48.2 oz)
Supporting source: samsclub:
Member's Mark 100% Psyllium Husk Fiber Capsules, 400 ct./pk., 2
pk.
Supporting source: hackerpublicradio: hpr4083
:: Drivecast: Man-talk.
Supporting source: hackerpublicradio:
hpr3048 :: Alternatives to toilet paper
Supporting source: hackerpublicradio:
hpr2345 :: Fixing a toilet roll holder
Supporting source: hackerpublicradio:
hpr2111 :: HPR Community News for August 2016
This work is licensed under a Creative Commons
Attribution-ShareAlike 4.0 International License.
Test Driven
Development Demo with PyTest
TDD
Discussed in hpr4075
Write a new test and run it. It should fail.
Write the minimal code that will pass the test.
Optionally, refactor the code while ensuring the tests continue to
pass.
PyTest
A framework for writing software tests in Python.
Normally used to test Python projects, but it can test any software
that Python can launch and read output from.
If you can write Python, you can write tests in PyTest.
Python assert - check that something is true
Test Discovery
Files named test*
Functions named test*
Demo Project
Trivial app as a demo
Print a summary of the latest HPR Episode
Title, Host, Date, Audio File
How do we get the latest show data
RSS feed
Feed parser
Feed URL
The pytest setup
The python script we want to test will be named
hpr_info.py
The test will be in a file will be named
test_hpr_info.py
test_hpr_info.py
import hpr_info
Run pytest
ModuleNotFoundError: No module named 'hpr_info'
We have written our first failing test.
The minimum code to get pytest to pass is to create an empty
file
touch hpr_info.py
Run pytest again
pytest
============================= test session starts ==============================
platform linux -- Python 3.11.8, pytest-7.4.4, pluggy-1.4.0
rootdir: /tmp/Demo
collected 0 items
What just happened
We created a file named test_hpr_info.py with a single
line to import hpr_info
We ran pytest and it failed because hpr_info.py did not exist
We created hpr_info.py and pytest ran without an
error.
This means we confirmed:
Pytest found the file named test_hpr_info.py and tried
to execute its tests
The import line is looking for a file named
hpr_info.py
Python Assert
In Python, assert tests whether a statement is true.
For example
assert 1 == 1
In pytest, we can use assert to check that a function returns a specific
value
assert module.function() == "Desired Output"
Without a comparison operator, we can also use assert to check
that something exists, without specifying a specific value
assert dictionary["key"]
Adding a Test
Import hpr_info will allow us to test functions inside
hpr_info.py
We can reference functions inside hpr_info.py by
prefixing the name with hpr_info., for example
hpr_info.HPR_FEED
The first step in finding the latest HPR episode is fetching a copy
of the feed.
Let's add a test to make sure the HPR feed URL is defined
import hpr_info
def test_hpr_feed_url():
assert hpr_info.HPR_FEED == "https://hackerpublicradio.org/hpr_ogg_rss.php"
pytest again
Let's run pytest again and we get the error
AttributeError: module 'hpr_info' has no attribute 'HPR_FEED'
So let's add just enough code to hpr_info.py to get the
test to pass
HPR_FEED = "https://hackerpublicradio.org/hpr_ogg_rss.php"
Run pytest again and we get 1 passed, indicating that
pytest found 1 test, which passed
Hooray, we are doing TDD
Next Test - Parsing the feed
Let's plan a function that pulls the HPR feed and returns the feed
data.
We can test that the result of fetching the feed is an HTTP 200
response
def test_get_show_data():
show_data = hpr_info.get_show_data()
assert show_data.status == 200
Now when we run pytest we get 1 failed, 1 passed and we
can see the error
AttributeError: module 'hpr_info' has no attribute 'get_show_data'
Let's write the code to get the new test to pass.
We will use the feedparser Python module to make it
easier to parse the RSS feed.
After we add the import and the new function,
hpr_info.py looks like this
import feedparser
HPR_FEED = "https://hackerpublicradio.org/hpr_ogg_rss.php"
def get_show_data():
showdata = feedparser.parse(HPR_FEED)
return showdata
Let's run pytest again. When I have more than one test, I like to add
the -v flag so I can see each test as it runs.
test_hpr_info.py::test_hpr_feed_url PASSED [ 50%]
test_hpr_info.py::test_get_show_data PASSED [100%]
Next Test
- Get the most recent episode from the feed
Now that we have the feed, let's test getting the first episode.
feedparser entries are dictionaries.
Let's test what the function returns to make sure it looks like an RSS
feed entry.
def test_get_latest_entry():
latest_entry = hpr_info.get_latest_entry()
assert latest_entry["title"]
assert latest_entry["published"]
After we verify the test fails, we can add the code that returns the
newest entry data to hpr_info.py, and pytest -v
will show 3 passing tests.
def get_latest_entry():
showdata = get_show_data()
return showdata["entries"][0]
Final Test
Let's test a function to see if it returns the values we want to
print.
We don't test for specific values, just that the data exists.
def test_get_entry_data():
entry_data = hpr_info.get_entry_data(hpr_info.get_latest_entry())
assert entry_data["title"]
assert entry_data["host"]
assert entry_data["published"]
assert entry_data["file"]
And then code to get the test to pass
def get_entry_data(entry):
    enclosure = None  # default in case no enclosure link is found
    for link in entry["links"]:
        if link.get("rel") == "enclosure":
            enclosure = link.get("href")
    return {
        "title": entry["title"],
        "host": entry["authors"][0]["name"],
        "published": entry["published"],
        "file": enclosure,
    }
Finish the HPR info script.
Now that we have tested that we can get all the info we want from
the most recent episode, let's add the last bit of code to
hpr_info.py to print the episode info
if __name__ == "__main__":
most_recent_show = get_entry_data(get_latest_entry())
print()
print("Most Recent HPR Episode")
for x in most_recent_show:
print(f"{x}: {most_recent_show.get(x)}")
if __name__ == "__main__": ensures code inside this
block will only run when the script is called directly, and not when
imported by test_hpr_info.py
Summary
TDD is a programming method where you write tests prior to writing
code.
TDD forces me to write smaller functions and more modular code.
Link to HPR info script and tests - TODO
Additional tests to add
Check the date is the most recent weekday
Check the host is listed on the correspondents page
Check others.
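The first of those additional tests could be sketched roughly like this. HPR releases shows Monday through Friday, so the published date should fall on a weekday; the date string here is a made-up stand-in, and a real test would pull the published field from the feed:

```python
# Hypothetical extra test: the published date should be a weekday.
# The date string below is a stand-in for the feed's "published" field.
from email.utils import parsedate_to_datetime

def test_published_on_weekday():
    published = "Fri, 29 Mar 2024 00:00:00 +0000"
    day = parsedate_to_datetime(published).weekday()  # Monday == 0
    assert day < 5  # 0-4 are Monday through Friday
```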
Project Files - https://gitlab.com/norrist/hpr-pytest-demo
This starts our look at the details of playing Civilization III. This
was a great advance on Civilization II, and we will see how this happens
as we look at the Cities, and at the terrain types on the tiles.
Links:
https://www.civfanatics.com/
https://civilization.fandom.com/wiki/Apolyton_Civilization_Site
https://strategywiki.org/wiki/Sid_Meier%27s_Civilization_III/Terrain
https://www.palain.com/gaming/civilization-iii/playing-civilization-iii-part-1/
Below are some links relating to things I discuss in the episode.
Note I haven’t included my actual Python code for fear that it would
be in breach of the copyright used by HPR as it is based on the output
from ChatGPT.
If you would like a copy of the code, please contact me via email
at mrxathpr@googlemail.com.
Related links
Wikipedia
article about the Raspberry pi
Link
to PiFace Control and Display add-on board for the Raspberry Pi. In my
episode I repeatedly and incorrectly refer to it as "Command and
Display"
Wikipedia
article about the Python programming language
hpr2344:
Follow on to HPR2340 (Tracking the HPR queue in Python)
Link to USB powered LED
BlinkStick
Link to new
HPR stats page in JSON format
Wikipedia article about
the JSON file format
Wikipedia article about
the XML file format
hpr3067:
Getting my Python3 code working in Python2
Link to the ChatGPT
website
Wikipedia
article about the R programming language
Article
about assigning values to a variable in the R programming language
A thought came to mind after recording the show. I suspect the reason
ChatGPT incorrectly described the bit of code on the site my wife found
was a lack of context. It may only have been fed the code without
being told specifically that it was R code. Unfortunately I was unable
to investigate this further as I could not find the original
site.
Today I Learnt more Bash
tips
Sgoti talks
about supplying options to bash scripts
Tags: Bash tips, TIL, getopts
#!/bin/bash
# License: GPL v3
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#Name: showtime.sh
#Purpose: Time to make a show.
#Version: beta 0.01
#Author: SGOTI (Some Guy On The Internet)
#Date: 2023-12-29
#variables:
bindir=/usr/bin/
cat=${bindir}cat
date=${bindir}date
echo=${bindir}echo
mkdir=${bindir}mkdir
dirshow0=${HOME}/Music/hpr/shows
dirshow1=${dirshow0}/$(${date} +%Y)
dirqueue=${dirshow1}/queue/$(${date} +%F)
dirreserve=${dirshow1}/reserve-queue/$(${date} +%F)
#start:
function help() {
${cat} << EOH
Usage: $0 [-s] [-r] [-q] [-h] name-of-show
-s (Regular queue)
-r (Reserve queue)
-q (quit)
-h (help)
Examples:
$0 -s name-of-show
$0 -r name-of-show
$0 -q
$0 -h
EOH
}
## Use `getopts` to read user option into script. ##
while getopts ":s:r:qh" option; do
case $option in
s)
show=$OPTARG
function mkq () {
${mkdir} -v -p ${dirqueue}/${show}/edit;
${mkdir} -v -p ${dirqueue}/${show}/prod;
${cat} > ${dirqueue}/${show}/edit/${show}.md << _EOD_
# ${show} #
## subtitle ##
- Tags:
This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/ "CC-BY-SA 4.0 International").
_EOD_
builtin pushd -n ${dirqueue}/${show}/edit;
builtin pushd -n ${dirqueue}/${show}/prod;
}
if [ -d ${dirshow1} ]; then
mkq
else
${echo} "Good Heavens! It's a new year.";
${mkdir} -v -p ${dirshow1};
mkq
fi
;;
r)
reserve=$OPTARG
function mkr () {
${mkdir} -v -p ${dirreserve}/${reserve}/edit;
${mkdir} -v -p ${dirreserve}/${reserve}/prod;
${cat} > ${dirreserve}/${reserve}/edit/${reserve}.md << _EOD_
# ${reserve} #
## subtitle ##
- Tags:
This work is licensed under a [Creative Commons Attribution-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-sa/4.0/ "CC-BY-SA 4.0 International").
_EOD_
builtin pushd -n ${dirreserve}/${reserve}/edit;
builtin pushd -n ${dirreserve}/${reserve}/prod;
}
if [ -d ${dirshow1} ]; then
mkr
else
${echo} "Good Heavens! It's a new year.";
${mkdir} -v -p ${dirshow1};
mkr
fi
;;
q)
${echo} "Goodbye.";
exit
;;
h)
help
exit
;;
*)
if [ -z "${option}" ]; then
help
exit 1
fi
${echo} "Good Heavens! Invalid input.";
help
exit
;;
esac
done
exit;
#!/bin/bash
# License: GPL v3
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
#Name: sc.sh
#Purpose:
#Version: beta 0.01
#Author: SGOTI (Some Guy On The Internet)
#Date: 2023-12-31
#variables:
bindir=/usr/bin/
cat=${bindir}cat
date=${bindir}date
echo=${bindir}echo
ls=${bindir}ls
screen=${bindir}screen
#start:
${echo} -e "\nStep 0: $(${date} +%F), $(${date} +%T)";
function help() {
${cat} << EOH
Usage: $0 [-b] [-s] [-k] [-h] name-of-show
-b [y|n] (Create or kill, base sockets.)
-s (Create new sockets.)
-k (Kill sockets.)
-h (help menu)
Examples:
$0 -b y
$0 -b n
$0 -s name-of-socket
$0 -k name-of-socket
$0 -h
EOH
}
${echo} -e "\nStep 1: $(${date} +%F), $(${date} +%T)";
while getopts ":b:s:k:h" option; do
case "${option}" in
b)
userinput0=$OPTARG
if [ ${userinput0} == "y" ]; then
${screen} -dmS apps;
${screen} -dmS jobby;
${screen} -ls;
elif [ ${userinput0} == "n" ]; then
# You don't need the entire name to kill the socket.
${screen} -X -S "app" kill
${screen} -X -S "job" kill
${screen} -ls;
else
${echo} "Good Heavens!"
${screen} -ls;
help
exit 1
fi
;;
s)
userinput0=$OPTARG
${screen} -dmS "${userinput0}";
clear
${screen} -ls;
${echo} -e "\nNew sockets: $(${date} +%F), $(${date} +%T)";
;;
k)
userinput0=$OPTARG
${screen} -XS ${userinput0} kill
clear
${screen} -ls;
${echo} -e "\nKill sockets: $(${date} +%F), $(${date} +%T)";
;;
h)
help
${echo} -e "\nHelp menu: $(${date} +%F), $(${date} +%T)";
exit
;;
*)
if [ -z "${option}" ]; then
help
exit 1
fi
${echo} "Good Heavens! Invalid input.";
help
exit
;;
esac
done
${echo} -e "\nStep 2: $(${date} +%F), $(${date} +%T)";
exit;
Source: In-Depth Series:
Today I Learnt
Source: In-Depth Series:
Bash Scripting
This work is licensed under a Creative Commons
Attribution-ShareAlike 4.0 International License.
Getting
started with the digiKam photo management software.
In hpr4071 I talked about my migration from Adobe Photoshop Elements
and Lightroom to digiKam.
Today I will give you some suggestions for getting started with
digiKam, based on my experience and opinions.
You find the digiKam website at https://www.digikam.org/.
On the documentation page you will find good and exhaustive
documentation. The big user manual can be read online or downloaded as
an EPUB book. I suggest you browse the manual before you install
digiKam.
My second suggestion is that you go to the Support page on the
digiKam site and look at the Mailing List Subscription section. You can
browse the mailing list archive online, but I really recommend that you
also join the mailing list. It is a very active list in which the main
developers also participate. Many questions are specific and may not be
of interest to you, but I have found that I learn a lot about digiKam,
its capabilities, and ways of working with photos that I had not
thought of. Of course I also learn about issues. I have asked some
questions myself and contributed answers. I highly recommend joining
the mailing list.
Thirdly, before you install digiKam, I suggest you copy a couple of
your photos into a new sample folder. When you start digiKam for the
first time, a guide helps you configure the initial settings. One of
these is to tell digiKam where your photos are located. I suggest you
point it at this sample folder only. Later on you add your real photo
folders, and on the same screen you can delete the sample folder from
digiKam. I suggest this because it lets you first consider the
settings in more detail, including the settings for metadata import
into digiKam.
Fourthly, consider where you want to store the digiKam databases.
I created a folder for them at the top level of my home directory,
but you can store them wherever you want.
Install digiKam, then go to the digiKam settings. There are a lot of
settings; I will focus only on the metadata settings.
DigiKam stores what you do in its own databases; that is mandatory.
In the settings, you can select what should, and should not, be
written to metadata. The metadata settings also cover associated
files, which is mostly about XMP sidecars.
You can select whether digiKam should read from associated files. This
is important to decide before your photo collection is imported into
digiKam. If you forget, it is possible to read the metadata again
after you change the settings.
Then there is an option for whether digiKam should write to sidecars,
and if yes, whether it should write to both the sidecar and the
object, or only when the object is write-protected.
Next I want to highlight the tick box for whether associated files
should be compatible with commercial programs. This needs further
explanation:
The default in digiKam is to have a separate XMP sidecar for each type
of photo file. For example, if you have a photo stored both as JPG and
raw, they will have separate sidecars. This has its advantages, but
is, to my knowledge, not according to the XMP sidecar standard.
According to the standard, which digiKam refers to as commercial, the
JPG and raw share the same sidecar. For example, Adobe Lightroom uses
the commercial method.
Examples to clarify:
DigiKam sidecar file syntax: photo1.jpg.xmp and
photo1.raw.xmp.
Standard/commercial file syntax: photo1.xmp.
I highly recommend spending an hour or two reviewing all the
settings, not only metadata, before you start to use digiKam for your
real photos. If there is something you do not understand, go to the
documentation, and do not hesitate to consult the digiKam mailing
list.
New hosts
Welcome to our new host:
Henrik Hemrin.
Last Month's Shows
Id
Day
Date
Title
Host
4065
Fri
2024-03-01
Framework 13 (13th gen Intel) review
Noodlez
4066
Mon
2024-03-04
HPR Community News for February 2024
HPR Volunteers
4067
Tue
2024-03-05
Hacking AI Models to Protect your Livelihood
hobs
4068
Wed
2024-03-06
Replacing a lightbulb in a microwave Oven
MrX
4069
Thu
2024-03-07
Passwords and Bitwarden news.
Some Guy On The Internet
4070
Fri
2024-03-08
Civilization III
Ahuka
4071
Mon
2024-03-11
Migration to digiKam as Digital Asset Management (DAM)
Henrik Hemrin
4072
Tue
2024-03-12
Piper text to speech engine
Archer72
4073
Wed
2024-03-13
Is the 1990 documentary "Cyberpunk" worth watching today?
Trixter
4074
Thu
2024-03-14
Jade Empire
Daniel Persson
4075
Fri
2024-03-15
Making a Pomodoro Timer
norrist
4076
Mon
2024-03-18
WLED House Lights!
operat0r
4077
Tue
2024-03-19
FFMPEG Series: Joining and Splitting files
Mr. Young
4078
Wed
2024-03-20
Learning to read music, part two: pitch
enistello
4079
Thu
2024-03-21
The Corresponding Source
Ken Fallon
4080
Fri
2024-03-22
Georgia to South Carolina
Ahuka
4081
Mon
2024-03-25
The Oh No! News.
Some Guy On The Internet
4082
Tue
2024-03-26
No swans at Swanston
Dave Morriss
4083
Wed
2024-03-27
Drivecast: Man-talk.
Some Guy On The Internet
4084
Thu
2024-03-28
Cloud learning
Daniel Persson
4085
Fri
2024-03-29
Android User Land Google Assistant GPT
operat0r
Comments this month
These are comments which have been made during the past month, either to shows released during the month or to past shows.
There are 21 comments in total.
Past shows
There are 5 comments on
3 previous shows:
hpr3060
(2020-04-24) "Running a local imap server"
by Ken Fallon.
Comment 1:
Somebdy on 2024-03-19:
"Nothing in particular"
Comment 2:
Ken Fallon on 2024-03-20:
"Very suspicious comment."
hpr4062
(2024-02-27) "HPR Music Series? - How will it go for the girl and me?"
by Fred Black.
Comment 3:
brian-in-ohio on 2024-03-01:
"Good show"
Comment 4:
norrist on 2024-03-02:
"I loved the Music"
hpr4063
(2024-02-28) "Re: ChatGPT Output is not compatible with CC-BY-SA "
by dnt.
Comment 1:
brian-in-ohio on 2024-03-01:
"Amen"
This month's shows
There are 16 comments on 9 of this month's shows:
hpr4067
(2024-03-05) "Hacking AI Models to Protect your Livelihood"
by hobs.
Comment 1:
Trey on 2024-03-05:
"Dejavu... Is this the same as HPR4055?"
hpr4068
(2024-03-06) "Replacing a lightbulb in a microwave Oven"
by MrX.
Comment 1:
Trey on 2024-03-06:
"Nicely done."Comment 2:
Ken Fallon on 2024-03-08:
"Be very careful"
hpr4069
(2024-03-07) "Passwords and Bitwarden news."
by Some Guy On The Internet.
Comment 1:
Deltaray on 2024-03-10:
"Thanks for the shout out"
hpr4071
(2024-03-11) "Migration to digiKam as Digital Asset Management (DAM)"
by Henrik Hemrin.
Comment 1:
Kevin O'Brien on 2024-03-15:
"Good show!"
hpr4073
(2024-03-13) "Is the 1990 documentary "Cyberpunk" worth watching today?"
by Trixter.
Comment 1:
HPR Listener on 2024-03-16:
"Restoration on archive.org"Comment 2:
Trixter on 2024-03-18:
"A slightly better version"
hpr4075
(2024-03-15) "Making a Pomodoro Timer"
by norrist.
Comment 1:
Trey on 2024-03-15:
"Great project"
hpr4077
(2024-03-19) "FFMPEG Series: Joining and Splitting files"
by Mr. Young.
Comment 1:
Henrik Hemrin on 2024-03-20:
"Useful to learn how ffmpeg can be used"
hpr4078
(2024-03-20) "Learning to read music, part two: pitch"
by enistello.
Comment 1:
Trey on 2024-03-20:
"Desperate plea for a drummer to respond?"
hpr4081
(2024-03-25) "The Oh No! News."
by Some Guy On The Internet.
Comment 1:
Trey on 2024-03-25:
"Pearl clutching"Comment 2:
hpr listener on 2024-03-25:
"Unnecessary insult"Comment 3:
Ken Fallon on 2024-03-26:
"@hpr listener"Comment 4:
Some Guy on the Internet on 2024-03-26:
"Desperately, with white knuckles, clutching pearls."Comment 5:
Henrik Hemrin on 2024-03-27:
"/e/OS - an open source Android alternative"Comment 6:
Kevin O'Brien on 2024-03-28:
"Good Show"
Mailing List discussions
Policy decisions surrounding HPR are taken by the community as a whole. This
discussion takes place on the Mail List which is open to all HPR listeners and
contributors. The discussions are open and available on the HPR server under
Mailman.
The threaded discussions this month can be found here:
https://lists.hackerpublicradio.com/pipermail/hpr/2024-March/thread.html
Events Calendar
With the kind permission of LWN.net we are linking to
The LWN.net Community Calendar.
Quoting the site:
This is the LWN.net community event calendar, where we track
events of interest to people using and developing Linux and free software.
Clicking on individual events will take you to the appropriate web
page.
Any other business
Many Podcatchers
don't show the episode author
It seems that the author fields in RSS and Atom feeds
are not shown by a number of Podcatchers. This field is
intended to show the author details, and with HPR shows it contains an
obfuscated email address and a name:
ken.nospam@nospam.fallon.ie (Ken Fallon)
The lack of this information makes it difficult to know who created a
given HPR show before listening to it.
A question has been raised as to whether we could include the
author details at the start of the show notes.
Comments are requested from the Community as to whether this
would be a useful addition.
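For podcatcher authors, reading the author field out of a feed item takes only a few lines. This sketch uses Python's standard library and a made-up RSS fragment shaped like an HPR feed item:

```python
import xml.etree.ElementTree as ET

# A minimal, made-up RSS fragment shaped like an HPR feed item.
rss = """<rss><channel><item>
<title>Example show</title>
<author>ken.nospam@nospam.fallon.ie (Ken Fallon)</author>
</item></channel></rss>"""

# Locate the first item and read its author element.
item = ET.fromstring(rss).find("channel/item")
print(item.findtext("author"))
```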
The Corresponding
Source - renamed podcast
The Corresponding Source (formerly Free as in Freedom) is a
bi-weekly oggcast, hosted and presented by Bradley M. Kuhn and Karen Sandler. The discussion includes
legal, policy, and many other issues in the Free, Libre, and Open Source
Software (FLOSS) world. Occasionally, guests join Bradley and Karen to
discuss various topics regarding FLOSS.
See HPR show
4079
HPR changes
The correspondent pages which show details of a
particular correspondent (host), and lists their shows, have been
enhanced to include the profile which has been uploaded by the host.
This was present on the old site but was omitted on the static site.
This was issue number 181 on the Gitea site, which has been closed.
Example: Host 30,
Ken Fallon
Return of OggCamp
There are plans to hold OggCamp 2024 in Manchester, UK this year,
after an absence of 5 years. It will be on October 12th and
13th at the Manchester Conference Centre in the
Pendulum Hotel near Piccadilly Station, where the last OggCamp was
held.
Details may be found on the OggCamp website, and announcements will
be made on social media sites.
It's an interesting topic you bring up. Personally, I am appalled by scarecrow tactics. I'd like to offer a different view. There is lots wrong with capitalism. The first thing is that capitalists believe their system is the only answer. The hangover after our last industrial revolution gave us shorter working days, safety rules and employee rights at work. Currently there is lots of demand out there for sabbaticals or people taking a break. So hell yeah, give me a robot who does my job so I can recover from stress, spend time with my children or travel, do volunteer work. Why do we doubt basic income? Currently those breaks are only available to the rich, or singles, or the childless. Have you seen a happy cashier? Have you heard a mine worker shouting: yes - let's continue ruining my lungs instead of giving me proper training so I can work in a solar panel farm? And as for the doctors! I have met so many who were an utter waste of my time. Yes, give me the Watson system. I had to retrain in my job