Lock and Code


Author: Malwarebytes


Description

Lock and Code tells the human stories within cybersecurity, privacy, and technology. Rogue robot vacuums, hacked farm tractors, and catastrophic software vulnerabilities—it’s all here.
123 Episodes
Privacy is many things for many people.

For the teenager suffering from a bad breakup, privacy is the ability to stop sharing her location and to block her ex on social media. For the political dissident advocating against an oppressive government, privacy is the protection that comes from secure, digital communications. And for the California resident who wants to know exactly how they’re being included in so many targeted ads, privacy is the legal right to ask a marketing firm how it collects their data.

In all these situations, privacy is being provided to a person, often by a company or that company’s employees. The decisions to disallow location sharing and block social media users are made—and implemented—by people. The engineering that goes into building a secure, end-to-end encrypted messaging platform is done by people. Likewise, the response to someone’s legal request is completed by a lawyer, a paralegal, or someone with a career in compliance.

In other words, privacy, for the people who spend their days at these companies, is work. It’s their expertise, their career, and their to-do list. But what does that work actually entail?

Today, on the Lock and Code podcast with host David Ruiz, we speak with Transcend Field Chief Privacy Officer Ron de Jesus about the responsibilities of privacy professionals today and how experts balance the privacy of users with the goals of their companies. De Jesus also explains how everyday people can meaningfully judge whether a company’s privacy “promises” have any merit by looking into what the company provides, including a legible privacy policy and “just-in-time” notifications that ask for consent for any data collection as it happens.

“When companies provide these really easy-to-use controls around my personal information, that’s a really great trigger for me to say, hey, this company, really, is putting their money where their mouth is.”

Tune in today.

You can also find us on Apple Podcasts, Spotify, and Google Podcasts, plus whatever preferred podcast platform you use. For all our cybersecurity coverage, visit Malwarebytes Labs at malwarebytes.com/blog.

Show notes and credits:
Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)

Listen up—Malwarebytes doesn't just talk cybersecurity, we provide it. Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.
Two weeks ago, the Lock and Code podcast shared three stories about home products that requested, collected, or exposed sensitive data online.

There were the air fryers that asked users to record audio through their smartphones. There was the smart ring maker that, even with privacy controls put into place, published data about users’ stress levels and heart rates. And there was the smart, AI-assisted vacuum that, through the failings of a group of contractors, allowed an image of a woman on a toilet to be shared on Facebook.

These cautionary tales involved “smart devices,” products like speakers, fridges, washers and dryers, and thermostats that can connect to the internet. But there’s another smart device that many folks might forget about that can collect deeply personal information: their cars.

Today, the Lock and Code podcast with host David Ruiz revisits a prior episode from 2023 about what types of data modern vehicles can collect, and what the car makers behind those vehicles could do with those streams of information. In the episode, we spoke with researchers at Mozilla—working under the team name “Privacy Not Included”—who reviewed the privacy and data collection policies of many of today’s automakers.

To put it shortly, the researchers concluded that cars are a privacy nightmare. According to the team’s research, Nissan said it can collect “sexual activity” information about consumers. Kia said it can collect information about a consumer’s “sex life.” Subaru passengers allegedly consented to the collection of their data by simply being in the vehicle. Volkswagen said it collects data like a person’s age and gender and whether they’re using their seatbelt, and it could use that information for targeted marketing purposes. And those are just the highlights.

Explained Zoë MacDonald, content creator for Privacy Not Included: “We were pretty surprised by the data points that the car companies say they can collect… including social security number, information about your religion, your marital status, genetic information, disability status… immigration status, race.”

In our full conversation from last year, we spoke with Privacy Not Included’s MacDonald and Jen Caltrider about the data that cars can collect, how that data can be shared, how it can be used, and whether consumers have any choice in the matter.

Tune in today.
This month, a consumer rights group out of the UK posed a question to the public that they’d likely never considered: Were their air fryers spying on them?

By analyzing the associated Android apps for three separate air fryer models from three different companies, a group of researchers learned that these kitchen devices didn’t just promise to make crispier mozzarella sticks, crunchier chicken wings, and flakier reheated pastries—they also wanted a lot of user data, from precise location to voice recordings from a user’s phone.

“In the air fryer category, as well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason,” the group wrote in its findings.

While it may be easy to discount the data collection requests of an air fryer app, it is getting harder to buy any type of product today that doesn’t connect to the internet, request your data, or share that data with unknown companies and contractors across the world.

Today, on the Lock and Code podcast, host David Ruiz tells three separate stories about consumer devices that somewhat invisibly collected user data and then spread it in unexpected ways. This includes kitchen utilities that sent data to China, a smart ring maker that published de-identified, aggregate data about the stress levels of its users, and a smart vacuum that recorded a sensitive image of a woman that was later shared on Facebook.

These stories aren’t about mass government surveillance, and they’re not about spying, or the targeting of political dissidents. Their intrigue is elsewhere, in how common it is for what we say, where we go, and how we feel, to be collected and analyzed in ways we never anticipated.

Tune in today.
The US presidential election is upon the American public, and with it come fears of “election interference.”

But “election interference” is a broad term. It can mean the now-regular and expected foreign disinformation campaigns that are launched to sow political discord or to erode trust in American democracy. It can include domestic campaigns to disenfranchise voters in battleground states. And it can include the upsetting and increasing threats made to election officials and volunteers across the country. But there’s an even broader category of election interference that is of particular importance to this podcast, and that’s cybersecurity.

Elections in the United States rely on a dizzying number of technologies. There are the voting machines themselves, there are electronic pollbooks that check voters in, and there are optical scanners that tabulate the votes that the American public actually casts when filling in an oval bubble with pen, or connecting an arrow with a solid line. And none of that is to mention the infrastructure that campaigns rely on every day to get information out—across websites, through emails, in text messages, and more.

That interlocking complexity is only multiplied when you remember that each individual state has its own way of complying with the Federal government’s rules and standards for running an election. As Cait Conley, Senior Advisor to the Director of the US Cybersecurity and Infrastructure Security Agency (CISA), explains in today’s episode: “There’s a common saying in the election space: If you’ve seen one state’s election, you’ve seen one state’s election.”

How, then, are elections secured in the United States, and what threats does CISA defend against?

Today, on the Lock and Code podcast with host David Ruiz, we speak with Conley about how CISA prepares and trains election officials and volunteers before the big day, whether or not an American’s vote can be “hacked,” and what the country is facing in the final days before an election, particularly from foreign adversaries that want to destabilize American trust.

“There’s a pretty good chance that you’re going to see Russia, Iran, or China try to claim that a distributed denial of service attack or a ransomware attack against a county is somehow going to impact the security or integrity of your vote. And it’s not true.”

Tune in today.
On the internet, you can be shown an online ad because of your age, your address, your purchase history, your politics, your religion, and even your likelihood of having cancer. This is because of the largely unchecked “data broker” industry.

Data brokers are analytics and marketing companies that collect every conceivable data point that exists about you, packaging it all into profiles that other companies use when deciding who should see their advertisements.

Have a new mortgage? There are data brokers that collect that information and then sell it to advertisers who believe new homeowners are the perfect demographic to purchase, say, furniture, dining sets, or other home goods. Bought a new car? There are data brokers that collect all sorts of driving information directly from car manufacturers—including the direction you’re driving, your car’s gas tank status, its speed, and its location—because some unknown data model said somewhere that, perhaps, car drivers in certain states who are prone to speeding might be more likely to buy one type of product compared to another.

This is just a glimpse of what is happening to essentially every single adult who uses the internet today. So much of the information that people would never divulge to a stranger—like their addresses, phone numbers, criminal records, and mortgage payments—is collected away from view by thousands of data brokers. And while these companies know so much about people, the public at large likely knows very little in return.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Cody Venzke, senior policy counsel with the ACLU, about how data brokers collect their information, what data points are off-limits (if any), and how people can protect their sensitive information, along with the harms that come from unchecked data broker activity—beyond just targeted advertising.

“We’re seeing data that’s been purchased from data brokers used to make decisions about who gets a house, who gets an employment opportunity, who is offered credit, who is considered for admission into a university.”

Tune in today.
Online scammers were seen this August stooping to a new low—abusing local funerals to steal from bereaved family and friends.

Cybercrime has never been a job of morals (calling it a “job” is already lending it too much credit), but, for many years, scams wavered between clever and brusque. Take the “Nigerian prince” email scam, which has plagued victims for close to two decades. In it, would-be victims would receive a mysterious, unwanted message from alleged royalty, and, in exchange for a little help in moving funds across international borders, would be handsomely rewarded. The scam was preposterous but effective—in fact, in 2019, CNBC reported that this very same “Nigerian prince” scam campaign resulted in $700,000 in losses for victims in the United States.

Since then, scams have evolved dramatically. Cybercriminals today will send deceptive emails claiming to come from Netflix, or Google, or Uber, tricking victims into “resetting” their passwords. Cybercriminals will leverage global crises, like the COVID-19 pandemic, and send fraudulent requests for donations to nonprofits and hospital funds. And, time and again, cybercriminals will find a way to play on our emotions—be they fear, or urgency, or even affection—to lure us into unsafe places online.

This summer, Malwarebytes social media manager Zach Hinkle encountered one such scam, and it happened while attending a funeral for a friend. In a campaign that Malwarebytes Labs is calling the “Facebook funeral live stream scam,” attendees at real funerals are being tricked into potentially signing up for a “live stream” service of the funerals they just attended.

Today on the Lock and Code podcast with host David Ruiz, we speak with Hinkle and Malwarebytes security researcher Pieter Arntz about the Facebook funeral live stream scam, what potential victims have to watch out for, and how cybercriminals are targeting actual, grieving family members with such foul deceit. Hinkle also describes what he felt in the moment of trying to not only take the scam down, but to protect his friends from falling for it.

“You’re grieving… and you go through a service and you’re feeling all these emotions, and then the emotion you feel is anger because someone is trying to take advantage of friends and loved ones of somebody who has just died. That’s so appalling.”

Tune in today.
On August 15, the city of San Francisco launched an entirely new fight against the world of deepfake porn—it sued the websites that make the abusive material so easy to create.

“Deepfakes,” as they’re often called, are fake images and videos that utilize artificial intelligence to swap the face of one person onto the body of another. The technology went viral in the late 2010s, as independent film editors would swap the actors of one film for another—replacing, say, Michael J. Fox in Back to the Future with Tom Holland. But very soon into the technology’s debut, it began being used to create pornographic images of actresses, celebrities, and, more recently, everyday high schoolers and college students.

Similar to the threat of “revenge porn,” in which abusive exes extort their past partners with the potential release of sexually explicit photos and videos, “deepfake porn” is sometimes used to tarnish someone’s reputation or to embarrass them amongst friends and family. But deepfake porn is slightly different from the traditional understanding of “revenge porn” in that it can be created without any real relationship to the victim. Entire groups of strangers can take the image of one person and put it onto the body of a sex worker, or an adult film star, or another person who was filmed having sex or posing nude.

The technology to create deepfake porn is more accessible than ever, and it’s led to a global crisis for teenage girls. In October of 2023, a group of more than 30 girls at a high school in New Jersey reportedly had their likenesses used by classmates to make sexually explicit and pornographic deepfakes. In March of this year, two teenage boys were arrested in Miami, Florida for allegedly creating deepfake nudes of male and female classmates who were between the ages of 12 and 13. And at the start of September, the BBC reported that police in South Korea were investigating deepfake pornography rings at two major universities.

While individual schools and local police departments in the United States are tackling deepfake porn harassment as it arises—with suspensions, expulsions, and arrests—the process is slow and reactive. Which is partly why San Francisco City Attorney David Chiu and his team took aim not at the individuals who create and spread deepfake porn, but at the websites that make it so easy to do so.

Today, on the Lock and Code podcast with host David Ruiz, we speak with San Francisco City Attorney David Chiu about his team’s lawsuit against 16 deepfake porn websites, the city’s history in protecting Californians, and the severity of abuse that these websites offer as a paid service.

“At least one of these websites specifically promotes the non-consensual nature of this. I’ll just quote: ‘Imagine wasting time taking her out on dates when you can just use website X to get her nudes.’”

Tune in today.
On August 24, at an airport just outside of Paris, a man named Pavel Durov was detained for questioning by French investigators. Just days later, the same man was charged with crimes related to the distribution of child pornography and illicit transactions, such as drug trafficking and fraud.

Durov is the CEO and founder of the messaging and communications app Telegram. Though Durov holds citizenship in France and the United Arab Emirates—where Telegram is based—he was born and lived for many years in Russia, where he started his first social media company, Vkontakte. The Facebook-esque platform gained popularity in Russia, not just amongst users, but also under the watchful eye of the government. Following a prolonged battle regarding the control of Vkontakte—which included government demands to deliver user information and to shut down accounts that helped organize protests against Vladimir Putin in 2012—Durov eventually left the company and the country altogether.

But more than 10 years later, Durov is once again finding himself a person of interest for government affairs, facing several charges now in France where, while he is not in jail, he has been ordered to stay.

After Durov’s arrest, the X account for Telegram responded, saying: “Telegram abides by EU laws, including the Digital Services Act—its moderation is within industry standards and constantly improving. Telegram’s CEO Pavel Durov has nothing to hide and travels frequently in Europe. It is absurd to claim that a platform or its owner are responsible for abuse of the platform.”

But how true is that?

In the United States, companies such as YouTube, X (formerly Twitter), and Facebook often respond to violations of “copyright”—the protection that gets violated when a random user posts clips or full versions of movies, television shows, and music. And the same companies get involved when certain types of harassment, hate speech, and violent threats are posted on public channels for users to see. This work, called “content moderation,” is standard practice for many technology and social media platforms today.

But there’s a chance that Durov’s arrest isn’t related to content moderation at all. Instead, it may be related to the things that Telegram users say in private to one another over end-to-end encrypted chats.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Electronic Frontier Foundation Director of Cybersecurity Eva Galperin about Telegram, its features, and whether Durov’s arrest is an escalation of content moderation gone wrong or the latest skirmish in government efforts to break end-to-end encryption.

“Chances are that these are requests around content that Telegram can see, but if [the requests] touch end-to-end encrypted content, then I have to flip tables.”

Tune in today.
Every age group uses the internet a little bit differently, and it turns out that for at least one Gen Z teen in the Bay Area, the classic approach to cybersecurity—defending against viruses, ransomware, worms, and more—is the least of her concerns. Of far more importance is Artificial Intelligence (AI).

Today, the Lock and Code podcast with host David Ruiz revisits a prior episode from 2023 about what teenagers fear the most about going online. The conversation is a strong reminder that what America’s youngest generations experience online is far from the same experience that Millennials, Gen X’ers, and Baby Boomers had with their own introduction to the internet.

Even stronger proof of this is found in recent research that Malwarebytes debuted this summer about how people in committed relationships share their locations, passwords, and devices with one another. As detailed in the larger report, “What’s mine is yours: How couples share an all-access pass to their digital lives,” Gen Z respondents were the most likely to say that they got a feeling of safety when sharing their locations with significant others. But a wrinkle appeared in that behavior, according to the same research: Gen Z was also the most likely to say that they only shared their locations because their partners forced them to do so.

In our full conversation from last year, we speak with Nitya Sharma about how her “favorite app” to use with friends is “Find My” on iPhone, what the dangers of AI “sneak attacks” are, and why she simply cannot be bothered about malware.

“I know that there’s a threat of sharing information with bad people and then abusing it, but I just don’t know what you would do with it. Show up to my house and try to kill me?”

Tune in today to listen to the full conversation.
Somewhere out there is a romantic AI chatbot that wants to know everything about you. But in a revealing overlap, other AI tools—which are developed and popularized by far larger companies in technology—could crave the very same thing. For AI tools of any type, our data is key.

In the nearly two years since OpenAI unveiled ChatGPT to the public, the biggest names in technology have raced to compete. Meta announced Llama. Google revealed Gemini. And Microsoft debuted Copilot. All these AI features function in similar ways: After having been trained on mountains of text, videos, images, and more, these tools answer users’ questions in immediate and contextually relevant ways. Perhaps that means taking a popular recipe and making it vegetarian friendly. Or maybe that involves developing a workout routine for someone who is recovering from a new knee injury. Whatever the ask, the more data that an AI tool has already digested, the better it can deliver answers.

Interestingly, romantic AI chatbots operate in almost the same way, as the more information that a user gives about themselves, the more intimate and personal the AI chatbot’s responses can appear. But where any part of our online world demands more data, questions around privacy arise.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Zoë MacDonald, content creator for Privacy Not Included at Mozilla, about romantic AI tools and how users can protect their privacy from ChatGPT and other AI chatbots. When in doubt, MacDonald said, stick to a simple rule:

“I would suggest that people don’t share their personal information with an AI chatbot.”

Tune in today.
In the world of business cybersecurity, the powerful technology known as “Security Information and Event Management” is sometimes thwarted by the most unexpected actors—the very people setting it up.

Security Information and Event Management—or SIEM—is a term used to describe data-collecting products that businesses rely on to make sense of everything going on inside their network, in the hopes of catching and stopping cyberattacks. SIEM systems can log events and information across an entire organization and its networks. When properly set up, SIEMs can collect activity data from work-issued devices, vital servers, and even the software that an organization rolls out to its workforce. The purpose of all this collection is to catch what might easily be missed.

For instance, SIEMs can collect information about repeated login attempts occurring at 2:00 am from a set of login credentials that belong to an employee who doesn’t typically start their day until 8:00 am. SIEMs can also flag when the login credentials of an employee with typically low access privileges are being used to attempt to log into security systems far beyond their job scope. SIEMs can also take in the data from an Endpoint Detection and Response (EDR) tool, and they can hoover up nearly anything that a security team wants—from printer logs, to firewall logs, to individual uses of PowerShell.

But just because a SIEM can collect something doesn’t necessarily mean that it should. Log activity for an organization of 1,000 employees is tremendous, and the collection of frequent activity could bog down a SIEM with noise, slow down a security team with useless data, and rack up serious expenses for a company.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Microsoft cloud solution architect Jess Dodson about how companies and organizations can set up, manage, and maintain their SIEMs, along with what advertising pitfalls to avoid when doing their shopping. Plus, Dodson warns about one of the simplest mistakes in trying to save budget—setting up arbitrary data caps on collection that could leave an organization blind.

“A small SMB organization … were trying to save costs, so they went and looked at what they were collecting and they found their biggest ingestion point,” Dodson said. “And what their biggest ingestion point was was their Windows security events, and then they looked further and looked for the event IDs that were costing them the most, and so they got rid of those.”

Dodson continued: “Problem was, the ones they got rid of were their Log On/Log Off events, which I think most people would agree is kind of important from a security perspective.”

Tune in today to listen to the full conversation.
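The mistake Dodson describes can be sketched in a few lines of code. This is a hypothetical illustration, not taken from any real SIEM product: the event IDs, per-day volumes, budget cutoff, and the `NEVER_DROP` list below are all invented for the example. The point is that any cost-driven ingestion filter should refuse to drop security-critical events, such as Windows logon (event ID 4624) and logoff (event ID 4634), no matter how expensive they are to collect.

```python
# Hypothetical pre-ingestion filter for a SIEM pipeline. Event IDs and
# volumes are illustrative; only 4624/4634 are real Windows security
# event IDs (successful logon and logoff).
NEVER_DROP = {4624, 4634}  # dropping these blinds the SIEM to logon activity

def choose_events_to_drop(ingestion_stats, budget_cutoff):
    """Given {event_id: events_per_day}, propose noisy event IDs to stop
    collecting for cost reasons -- but never security-critical ones."""
    dropped = []
    # Walk event IDs from highest volume (most expensive) downward.
    for event_id, volume in sorted(ingestion_stats.items(),
                                   key=lambda kv: kv[1], reverse=True):
        if volume < budget_cutoff:
            break  # everything below the cutoff is cheap enough to keep
        if event_id in NEVER_DROP:
            continue  # expensive, but blinding to drop -- keep it anyway
        dropped.append(event_id)
    return dropped

# Example: network-connection events (5156 here) dwarf everything else,
# so they are the only candidate; logon/logoff survive despite their cost.
stats = {4624: 900_000, 4634: 850_000, 5156: 2_000_000, 4688: 400_000}
print(choose_events_to_drop(stats, budget_cutoff=500_000))  # -> [5156]
```

A real deployment would express this as collection rules or ingestion-time transformations in the SIEM itself rather than Python, but the design choice is the same: encode a "never drop" allowlist before letting cost data drive what gets filtered.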
Full-time software engineer and part-time Twitch streamer Ali Diamond is used to seeing herself on screen, probably because she’s the one who turns the camera on.But when Diamond received a Direct Message (DM) on Twitter earlier this year, she learned that her likeness had been recreated across a sample of AI-generated images, entirely without her consent.On the AI art sharing platform Civitai, Diamond discovered that a stranger had created an “AI image model” that was fashioned after her. The model was available for download so that, conceivably, other members of the community could generate their own images of Diamond—or, at least, the AI version of her. To show just what the AI model was capable of, its creator shared a few examples of what he’d made: There was AI Diamond standing what looked at a music festival, AI Diamond with her head tilted up and smiling, and AI Diamond wearing, what the real Diamond would later describe, as an “ugly ass ****ing hat.”AI image generation is seemingly lawless right now.Popular AI image generators, like Stable Diffusion, Dall-E, and Midjourney, have faced valid criticisms from human artists that these generators are copying their labor to output derivative works, a sort of AI plagiarism. AI image moderation, on the other hand, has posed a problem not only for AI art communities, but for major social media networks, too, as anyone can seemingly create AI-generated images of someone else—without that person’s consent—and distribute those images online. 
It happened earlier this year, when AI-generated, sexually explicit images of Taylor Swift were seen by millions of people on Twitter before the company took those images down.

In that instance, Swift had the support of countless fans who reported each post they found on Twitter that shared the images.

But what happens when someone has to defend themselves against an AI model made of their likeness, without their consent?

Today, on the Lock and Code podcast with host David Ruiz, we speak with Ali Diamond about finding an AI model of herself, what the creator had to say about making the model, and what the privacy and security implications are for everyday people whose likenesses have been stolen against their will.

For Diamond, the experience was unwelcome and new, as she’d never experimented with AI image generation on herself.

“I’ve never put my face into any of those AI services. As someone who has a love of cybersecurity and an interest in it… you’re collecting faces to do what?”

Tune in today.
More than 20 years ago, a law that the United States would eventually use to justify the warrantless collection of Americans’ phone call records actually started out as a warning sign against an entirely different target: libraries.

Not two months after terrorists attacked the United States on September 11, 2001, Congress responded with the passage of the USA Patriot Act. Originally championed as a tool to fight terrorism, the Patriot Act, as introduced, allowed the FBI to request “any tangible things” from businesses, organizations, and people during investigations into alleged terrorist activity. Those “tangible things,” the law said, included “books, records, papers, documents, and other items.”

Or, to put it a different way: things you’d find in a library, and records of the things you’d check out from a library. The concern around this language was so strong that this section of the USA Patriot Act got a new moniker amongst the public: “the library provision.”

The Patriot Act passed, and years later, the public was told that, all along, the US government wasn’t interested in library records.

But those government assurances are old.

What remains true is that libraries and librarians want to maintain the privacy of your records. And what also remains true is that the government looks anywhere it can for information to aid investigations into national security, terrorism, human trafficking, illegal immigration, and more.

What’s changed, however, is that companies that libraries have relied on for published materials and collections—Thomson Reuters, Reed Elsevier, LexisNexis—have reimagined themselves as big data companies.
And they’ve lined up to provide newly collected data to the government, particularly to agencies like Immigration and Customs Enforcement, or ICE.

There are many layers to this data web, and libraries are seemingly stuck in the middle.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Sarah Lamdan, deputy director of the Office for Intellectual Freedom at the American Library Association, about library privacy in the digital age, whether police are legitimately interested in what the public is reading, and how a small number of major publishing companies suddenly started aiding the work of government surveillance:

“Because to me, these companies were information providers. These companies were library vendors. They’re companies that we work with because they published science journals and they published court reporters. I did not know them as surveillance companies.”

Tune in today.
🎶 Ready to know what Malwarebytes knows?
Ask us your questions and get some answers.
What is a passphrase and what makes it—what’s the word?
Strong? 🎶

Every day, countless readers, listeners, posters, and users ask us questions about some of the most commonly cited topics and terminology in cybersecurity. What are passkeys? Is it safer to use a website or an app? How can I stay safe from a ransomware attack? What is the dark web? And why can’t cybercriminals simply be caught and stopped?

For some cybersecurity experts, these questions may sound too “basic”—easily researched online and not worth the time or patience to answer. But those experts would be wrong.

In cybersecurity, so much of the work involves helping people take personal actions to stay safe online. That means it’s on cybersecurity companies and practitioners to provide clarity when the public is asking for it. Without this type of guidance, people are less secure, scammers are more successful, and clumsy, fixable mistakes are rarely addressed.

This is why, this summer, Malwarebytes is working harder on meeting people where they are. For weeks, we’ve been collecting questions from our users about WiFi security, data privacy, app settings, device passcodes, and identity protection.

All of these questions—no matter their level of understanding—are appreciated, as they help the team at Malwarebytes understand where to improve its communication. In cybersecurity, it is critical to create an environment where, for every single person seeking help, it’s safe to ask.
It’s safe to ask what’s on their mind, safe to ask what confuses them, and safe to ask what they might even find embarrassing.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes Product Marketing Manager Tjitske de Vries about the modern rules around passwords, the difficulties of stopping criminals on the dark web, and why online scams hurt people far beyond their financial repercussions.

“We had [an] 83-year-old man who was afraid to talk to his wife for three days because he had received… a sextortion scam… This is how they get people, and it’s horrible.”

Tune in today.
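The jingle above asks what makes a passphrase strong. As an illustration only (the episode itself doesn’t walk through code), a diceware-style generator can be sketched in a few lines of Python; the eight-word list below is a hypothetical stand-in for a real word list of thousands of entries:

```python
import secrets

# Hypothetical stand-in for a real diceware word list (real lists hold ~7,776 words).
WORDS = ["correct", "horse", "battery", "staple",
         "orbit", "lantern", "maple", "quartz"]

def make_passphrase(n_words: int = 4, sep: str = "-") -> str:
    """Join randomly chosen words using a cryptographically secure RNG."""
    return sep.join(secrets.choice(WORDS) for _ in range(n_words))

print(make_passphrase())
```

With a real 7,776-word list, each word adds about 12.9 bits of entropy, so a four-word passphrase carries roughly 51 bits; the strength comes from length and true randomness, not from symbol substitutions.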
This is a story about how the FBI got everything it wanted.

For decades, law enforcement and intelligence agencies across the world have lamented the availability of modern technology that allows suspected criminals to hide their communications from legal scrutiny. This long-standing debate has sometimes spilled into public view, as it did in 2016, when the FBI demanded that Apple unlock an iPhone used during a terrorist attack in the California city of San Bernardino. Apple pushed back on the FBI’s request, arguing that the company could only retrieve data from the iPhone in question by writing new software with global consequences for security and privacy.

“The only way to get information—at least currently, the only way we know,” said Apple CEO Tim Cook, “would be to write a piece of software that we view as sort of the equivalent of cancer.”

The standoff held the public’s attention for months, until the FBI relied on a third party to crack into the device.

But just a couple of years later, the FBI obtained an even bigger backdoor into the communication channels of underground crime networks around the world, and it did so almost entirely off the radar.

It all happened with the help of Anom, a budding company behind an allegedly “secure” phone that promised users a bevy of secretive technological features, like end-to-end encrypted messaging, remote data wiping, secure storage vaults, and even voice scrambling. But, unbeknownst to Anom’s users, the entire company was a front for law enforcement. On Anom phones, every message, every photo, every piece of incriminating evidence, and every order to kill someone was collected and delivered, in full view, to the FBI.

Today, on the Lock and Code podcast with host David Ruiz, we speak with 404 Media cofounder and investigative reporter Joseph Cox about the wild, true story of Anom.
How did it work, was it “legal,” where did the FBI learn to run a tech startup, and why, amidst decades of debate, are some people ignoring the one real-life example of global forces successfully installing a backdoor into a company?

“The public… and law enforcement, as well, [have] had to speculate about what a backdoor in a tech product would actually look like. Well, here’s the answer. This is literally what happens when there is a backdoor, and I find it crazy that not more people are paying attention to it.”

Joseph Cox, author, Dark Wire, and 404 Media cofounder

Tune in today.

Cox’s investigation into Anom, presented in his book titled Dark Wire, publishes June 4.
The irrigation of the internet is coming.

For decades, we’ve accessed the internet much like how we, so long ago, accessed water—by traveling to it. We connected (quite literally), we logged on, and we zipped to addresses and sites to read, learn, shop, and scroll. Over the years, the internet became accessible from increasingly more devices, like smartphones, smartwatches, and even smart fridges. But still, it had to be accessed, like a well dug into the ground to pull up the water below.

Moving forward, that could all change.

This year, several companies debuted their vision of a future that incorporates artificial intelligence to deliver the internet directly to you, with less searching, less typing, and less decision fatigue. For the startup Humane, that vision includes the use of the company’s AI-powered, voice-operated wearable pin that clips to your clothes. By simply speaking to the AI pin, users can text a friend, discover the nutritional facts about food that sits directly in front of them, and even compare the price of an item found in stores with the price online.

For a separate startup, Rabbit, that vision similarly relies on a small, attractive smart-concierge gadget, the R1. With the bright-orange slab designed in collaboration with the company Teenage Engineering, users can hail an Uber to take them to the airport, play an album on Spotify, and put in a delivery order for dinner.

Away from physical devices, The Browser Company of New York is also experimenting with AI in its own web browser, Arc. In February, the company debuted its endeavor to create a “browser that browses for you” with a snazzy video that showed off Arc’s AI capabilities to create unique, individualized web pages in response to questions about recipes, dinner reservations, and more.

But all these small-scale projects, announced in the first month or so of 2024, had to make room a few months later for big-money interest from the world’s first internet conglomerate—Google.
At the company’s annual Google I/O conference on May 14, VP and Head of Google Search Liz Reid pitched the audience on an AI-powered version of search in which “Google will do the Googling for you.”

Now, Reid said, even complex, multi-part questions can be answered directly within Google, with no need to click a website, evaluate its accuracy, or flip through its many pages to find the relevant information within.

This, it appears, could be the next phase of the internet… and our host David Ruiz has a lot to say about it.

Today, on the Lock and Code podcast, we bring back Director of Content Anna Brading and Cybersecurity Evangelist Mark Stockley to discuss AI-powered concierges, the value of human choice when so many small decisions could be taken away by AI, and, as explained by Stockley, whether the appeal of AI lies not in finding the “best” vacation, recipe, or dinner reservation, but rather the best of anything for its user.

“It’s not there to tell you what the best chocolate chip cookie in the world is for everyone. It’s there to help you figure out what the best chocolate chip cookie is for you, on a Monday evening, when the weather’s hot, and you’re hungry.”

Tune in today.
You’ve likely felt it: The dull pull downwards of a smartphone scroll. The “five more minutes” just before bed. The sleep still there after waking. The edges of your calm slowly fraying.

After more than a decade of our most recent technological experiment, it turns out that having the entirety of the internet in the palm of your hands could be… not so great. Obviously, the effects of this are compounded by the fact that the internet that was built after the invention of the smartphone is a very different internet than the one before—supercharged with algorithms that get you to click more, watch more, buy more, and rest so much less.

But for one group in particular, across the world, the impact of smartphones and constant social media may be causing an unprecedented mental health crisis: young people.

According to the American College Health Association, the percentage of undergraduates in the US—so, mainly young adults in college—who were diagnosed with anxiety has increased 134% since 2010. In the same time period, for the same group, there was an increase in diagnoses of depression by 106%, ADHD by 72%, bipolar disorder by 57%, and anorexia by 100%.

That’s not all. According to a US National Survey on Drug Use and Health, the prevalence of anxiety in America increased for every age group except those over 50, again, since 2010.
Those aged 35–49 experienced a 52% increase, those aged 26–34 experienced a 103% increase, and those aged 18–25 experienced a 139% increase.

This data, and much more, was cited by the social psychologist and author Jonathan Haidt in debuting his latest book, “The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.” In the book, Haidt examines what he believes is a mental health crisis unique amongst today’s youth, and he proposes that much of the crisis has been brought about by a change in childhood—away from a “play-based” childhood and into a “phone-based” one.

This shift, Haidt argues, is largely to blame for the increased rates of anxiety, depression, suicidality, and more.

And rather than just naming the problem, Haidt also proposes five solutions to turn things around:

Give children far more time playing with other children.
Look for more ways to embed children in stable real-world communities.
Don’t give a smartphone as the first phone.
Don’t give a smartphone until high school.
Delay the opening of accounts on nearly all social media platforms until the beginning of high school (at least).

But while Haidt’s proposals may feel right—his book has spent five weeks on the New York Times Best Seller list—some psychologists disagree.

Writing for the outlet Platformer, reporter Zoe Schiffer spoke with multiple behavioral psychologists who alleged that Haidt’s book cherry-picks survey data, ignores mental health crises amongst adults, and over-simplifies a complex problem with a blunt solution.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Dr. Jean Twenge to get more clarity on the situation: Is there a mental health crisis amongst today’s teens? Is it unique to their generation? And can it really be traced to the use of smartphones and social media?

According to Dr. Twenge, the answer to all those questions is, pretty much, “Yes.” But, she said, there’s still some hope to be found.

“This is where the argument around smartphones and social media being behind the adolescent mental health crisis actually has, kind of paradoxically, some optimism to it. Because if that’s the cause, that means we...
Our Lock and Code host, David Ruiz, has a bit of an apology to make: “Sorry for all the depressing episodes.”

When the Lock and Code podcast explored online harassment and abuse this year, our guest provided several guidelines and tips for individuals to lock down their accounts and remove their sensitive information from the internet, but larger problems remained. Content moderation is failing nearly everywhere, and data protection laws are unequal across the world.

When we told the true tale of a virtual kidnapping scam in Utah, though the teenaged victim at the center of the scam was eventually found, his family still lost nearly $80,000.

And when we asked Mozilla’s Privacy Not Included team about what types of information modern cars can collect about their owners, we were entirely blindsided by the policies from Nissan and Kia, which claimed the companies can collect data about their customers’ “sexual activity” and “sex life.”

(Let’s also not forget about that Roomba that took a photo of someone on a toilet, and how that photo ended up on Facebook.)

In looking at these stories collectively, it can feel like the everyday consumer is hopelessly outmatched against modern companies. What good does it do to utilize personal cybersecurity best practices when the companies we rely on can still leak our most sensitive information and suffer few consequences? What’s the point of using a privacy-forward browser to better obscure my online behavior from advertisers when the machinery that powers the internet finds new ways to surveil our every move?

These are entirely relatable, if fatalistic, feelings.
But we are here to tell you that nihilism is not the answer.

Today, on the Lock and Code podcast, we speak with Justin Brookman, director of technology policy at Consumer Reports, about some of the most recent, major consumer wins in the tech world, what it took to achieve those wins, and what levers consumers can pull on today to have their voices heard.

Brookman also speaks candidly about the shifting priorities in today's legislative landscape:

“One thing we did make the decision about is to focus less on Congress because, man, I’ll meet with those folks so we can work on bills, [and] there’ll be a big hearing, but they’ve just failed to do so much.”

Tune in today.
A digital form of protest could become the go-to response for the world’s largest porn website as it faces increased regulation: not letting people access the site.

In March, PornHub blocked access for visitors connecting to its website from Texas. It marked the second time in the past 12 months that the porn giant shut off its website to protest new requirements in online age verification.

The Texas law, which was signed in June 2023, requires several types of adult websites to verify the age of their visitors by either collecting visitors’ information from a government ID or relying on a third party to verify age through the collection of multiple streams of data, such as education and employment status.

PornHub has long argued that these age verification methods do not keep minors safer and that they place undue onus on websites to collect and secure sensitive information.

The fact remains, however, that these types of laws are growing in popularity.

Today, Lock and Code revisits a prior episode from 2023 with guest Alec Muffett, discussing online age verification proposals, how they could weaken security and privacy on the internet, and whether these efforts are oafishly trying to solve a societal problem with a technological solution.

“The battle cry of these people has always been—either directly or mocked as being—‘Could somebody think of the children?’” Muffett said. “And I’m thinking about the children because I want my daughter to grow up with an untracked, secure, private internet when she’s an adult. I want her to be able to have a private conversation. I want her to be able to browse sites without giving over any information or linking it to her identity.”

Muffett continued:

“I’m trying to protect that for her.
I’d like to see more people grasping for that.”

Alec Muffett

Tune in today.
Few words apply as broadly to the public—yet mean as little—as “home network security.”

For many, a “home network” is an amorphous thing. It exists somewhere between a router, a modem, an outlet, and whatever cable it is that plugs into the wall. But the idea of a “home network” doesn’t need to intimidate, and securing that home network could be simpler than many folks realize.

For starters, a home network can be simply understood as a router—which is the device that provides access to the internet in a home—and the other devices that connect to that router. That includes obvious devices like phones, laptops, and tablets, and it includes “Internet of Things” devices, like a Ring doorbell, a Nest thermostat, and any Amazon Echo device that comes pre-packaged with the company’s voice assistant, Alexa. There are also myriad “smart” devices to consider: smartwatches, smart speakers, smart light bulbs, and, don’t forget, the smart fridges.

If it sounds like we’re describing a home network as nothing more than a “list,” that’s because a home network is pretty much just a list. But where securing that list becomes complicated is in all the updates, hardware issues, settings changes, and even scandals that relate to every single device on that list.

Routers, for instance, provide their own security, but over many years, they can lose the support of their manufacturers. IoT devices, depending on the brand, can be made from cheap parts with little concern for user security or privacy. And some devices have scandals plaguing their past—smart doorbells have been hacked, and fitness trackers have revealed running routes to the public online.

This shouldn’t be cause for fear.
Instead, it should help prove why home network security is so important.

Today, on the Lock and Code podcast with host David Ruiz, we’re speaking with cybersecurity and privacy advocate Carey Parker about securing your home network.

Author of the book Firewalls Don’t Stop Dragons and host of the podcast of the same name, Parker chronicled the typical home network security journey last year and distilled the long process into four simple categories: scan, simplify, assess, remediate.

In joining the Lock and Code podcast yet again, Parker explains how everyone can begin their home network security path—where to start, what to prioritize, and the risks of putting this work off—while also emphasizing the importance of every home’s router:

“Your router is kind of the threshold that protects all the devices inside your house. But, like a vampire, once you invite the vampire across the threshold, all the things inside the house are now up for grabs.”

Carey Parker

Tune in today.
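Since a home network really is just a list of devices, Parker’s four categories can be pictured with a toy version of that list: after the scan step produces an inventory, the assess step flags devices that need remediation. A minimal Python sketch (the device names and end-of-support dates are invented for illustration):

```python
from datetime import date

# Hypothetical inventory produced by the "scan" step.
devices = [
    {"name": "router",         "end_of_support": date(2026, 1, 1)},
    {"name": "smart-doorbell", "end_of_support": date(2023, 6, 1)},
    {"name": "laptop",         "end_of_support": date(2028, 3, 1)},
]

def assess(inventory, today):
    """Return names of devices whose manufacturer support has lapsed."""
    return [d["name"] for d in inventory if d["end_of_support"] < today]

print(assess(devices, date(2025, 1, 1)))  # ['smart-doorbell']
```

Real remediation means updating firmware, changing default settings, or replacing unsupported hardware; the sketch only shows how keeping the list current makes the assess step mechanical.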