O'Reilly Radar Podcast - O'Reilly Media Podcast

Author: O'Reilly Media

Description

O'Reilly Radar tracks the technologies and people that will shape our world in the years to come. Each episode of O'Reilly Radar features an interview with an industry thought leader. We also take a step back from the breathless pace of the latest tech news to examine why new developments are important and what they might mean down the road.
42 Episodes
The O'Reilly Radar Podcast: The maturity of AI in enterprise, bridging the AI gaps, and what the U.S. can do with $4 trillion.

This week, I sit down with Aman Naimat, senior vice president of technology at Demandbase and co-founder and CTO of Spiderbook. We talk about his project to build a knowledge graph of the entire business world using natural language processing and deep learning. We also talk about the role AI is playing in those companies today and what's going to drive AI adoption in the future.

Here are a few highlights:

Surveying AI adoption

We were studying businesses for the purpose of helping salespeople talk to accounts, and we realized we could use our technology to study entire markets. So, we decided to study entire markets of how companies are adopting AI or big data. Really, the way it works is, we built a knowledge graph of how businesses interact with each other, their behavioral signals, who's doing business with whom, who are their partners, customers, suppliers? Who are the influencers, the decision-makers? Who's buying what product? In essence, we have built a universal database, if I may, or a knowledge graph, of the entire business world. We use natural language processing and deep learning—the short answer for what data sets we look at is everything. We are now reading the entire business internet, completely unstructured data, from SEC filings to financial regulatory filings to tweets to every blog post, every job post, every conference visit, every PowerPoint, every video. So, it's really pretty comprehensive. We also have a lot of proprietary data around the business world, as to who's reading or viewing what ad, and we triangulate all of that in this graph and do machine learning on top to classify each of the 500,000 companies by how mature they are in AI. How many people do they have working on it, what are they doing with it, what are the use cases, how much money are they spending? That's how we built the study.

Bridging the AI gap between academia and enterprise

What will drive adoption in AI, I think, is also investment. The current landscape, according to our study, which was the first data-driven study of the market, is that only a few companies are really investing in it. There's some interest in other places, but companies like Google—the CEO recently came out and said that AI is really how the company will be framed going forward. So, we need more investments, more venture capital investments, more government investments, and that's not just in starting startups, but putting together data sets that data scientists could consume. Public data sets are a huge gap in the market between what is available in academia and what companies like us at Demandbase have—we have a ton of data, proprietary data. So, to be able to have such data available in open source...that could spark new types of use cases.

Can we build an AI-based representative democracy?

Another use case: the largest set of spend in the world is actually the United States government—$4 trillion; it's a huge market. So, how do you allocate those resources? Is it possible that we can build systems that, in essence, become some sort of an AI-based representative democracy where we can optimize the preferences of individual citizens? Today, most citizens are completely unaware of what's happening at their local government level or state level. If I ask you who your state senator is, you probably don't know. Nobody actually does, yet the state level pretty much has the biggest impact on our lives. They control education, roads, the environment, and they have some of the largest budgets—health care. There are suddenly areas where we can try to understand individual preferences automatically, and there's a lot of data—for each bill that is passed, there are thousands and thousands of pages of feedback, text, that AI can process and understand. So, obviously some of this is really far out, but that doesn't mean we can't do something today.
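The pipeline Naimat describes (extract relationships into a graph, derive per-company signals, classify maturity) can be sketched at toy scale. Below is a minimal Python illustration; the companies, relations, signal list, and training labels are all invented for the example and say nothing about Demandbase's actual system.

```python
# Toy sketch of a "knowledge graph of the business world" feeding a
# maturity classifier. All entities and labels below are hypothetical.
import networkx as nx
from sklearn.linear_model import LogisticRegression

G = nx.MultiDiGraph()
# In a real pipeline these edges would come from NLP over filings,
# job posts, blogs, etc. Here they are written by hand.
G.add_edge("AcmeCorp", "TensorFlow", relation="uses_technology")
G.add_edge("AcmeCorp", "DataVendorX", relation="buys_from")
G.add_edge("AcmeCorp", "ML Engineer", relation="hiring_for")
G.add_edge("SmallCo", "Spreadsheets", relation="uses_technology")

AI_SIGNALS = {"TensorFlow", "ML Engineer"}  # invented signal list

def features(company):
    """Count AI-related edges and total edges for one company node."""
    edges = list(G.out_edges(company))
    ai_edges = sum(1 for _, target in edges if target in AI_SIGNALS)
    return [ai_edges, len(edges)]

# Invented labeled examples: [ai_edges, total_edges] -> 1 if "AI mature"
X_train = [[0, 3], [1, 4], [3, 6], [0, 1]]
y_train = [0, 0, 1, 0]
clf = LogisticRegression().fit(X_train, y_train)

for company in ["AcmeCorp", "SmallCo"]:
    print(company, "maturity class:", clf.predict([features(company)])[0])
```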
O'Reilly Radar Podcast: David Beyer on AI adoption challenges, the complexities of getting an AI ROI, and the dangers of hype.

This week, I sit down with David Beyer, an investor with Amplify Partners. We talk about machine learning and artificial intelligence, the challenges he's seeing in AI adoption, and what he thinks is missing from the AI conversation.

Here are a few highlights:

Complexities of AI adoption

AI adoption is actually a multifaceted question. It's something that touches on policy at the government level. It touches on labor markets and questions around equity and fairness. It touches on broad commercial questions around industries and how they evolve over time. There are many, many ways to address this. I think a good way to think about AI adoption at the broader, more abstract level of sectors or categories is to actually zoom down a bit and look at what it is actually replacing. The way to do that is to think at the atomic level of jobs and work. What is work? People have been talking about questions of productivity and efficiency for quite some time, but a good way to think of it from the lens of the computer or machine learning is to divide work into four categories. It's a two-by-two matrix: cognitive versus manual work, and routine versus non-routine work. The 90s internet and computer revolution, for the most part, tackled the routine work—spreadsheets and word processing, things that could be specified by an explicit set of instructions. The more interesting stuff that's happening now, and that should be happening over the next decade, is how does software start to impact non-routine work, both cognitive and manual? Cognitive work is tricky. It can be divided into two categories: things that are analytical (so, math and science and the like) and things that are more interpersonal and social—sales being a good example. Then with non-routine work, the first instinct is to think about whether the job seems simple to us as people—cleaning a room, at first blush, seems like something pretty much anyone who's able could do; it's actually incredibly difficult. There's this bizarre, unexpected result that the hard problems are easier to automate, things like logic. The easier problems are incredibly hard to automate—things that require visuospatial orientation, navigating complex and potentially changing terrain. Things that we have basically been programmed over millennia in our brains to accomplish are actually very difficult to do from the perspective of coding a set of instructions into a computer.

AI ROI

The question I have in my mind is: in the 90s and 2000s, was simply applying computers to business and communication its own revolution? Does machine learning and AI constitute a new category, or is machine learning the final complement that extracts the productivity out of that initial silicon revolution, so to speak? There's this economic historian, Paul David, out of Oxford, who wrote an interesting thing looking at American factories and how they adapted to electrification because, previously, a lot of them were steam powered. The initial adoption was really done with a lack of imagination: they used motors where steam used to be and hadn't really redesigned anything. They didn't really get much of any productivity. It was only when that crop of old managers was replaced with new managers that people fully redesigned the factory into what we now recognize as the modern factory. The question is the technology itself: from our perspective as investors, it's insufficient on its own. You need business process and workplace rethinking. An area of research, as it relates to this model of AI adoption, is how reconstructible is it—is there an index to describe how particular industries or particular workflows or businesses can be remodeled to use machine learning with more leverage? I think that speaks to how those managers in those instances are going to look at ROI. If the payback period for a particular investment is uncertain or really long, they're less likely to adopt it, which is why you're seeing a lot of pickup of robots in factories. You can specify and drive the ROI; the payback period for that is coming down because it's incredibly clear, well-defined. Another industry is, for example, using machine learning in a legal setting for a law firm. There are parts of it—for example, technology-assisted review—where the ROI's pretty clear. You can measure it in time saved. For other technologies that help assist in prediction or judgment for, say, higher-level thinking, the return is pretty unclear. A lot of the interesting technologies coming out these days—from, in particular, deep learning—enable things that operate at a higher level than we're used to. At the same time, though, companies are building products around that that do relatively high-level things that are hard to quantify. The productivity gains from that are not necessarily clear.

The dangers of AI hype

One thing I'd say, rather than missing from the AI conversation, is something there's too much of: hype. Too many businesses now are pitching AI almost as though it's batteries included. That's dangerous because it's going to potentially lead to over-investment in things that overpromise. Then, when they under-deliver, it has a deflationary effect on people's attitudes toward the space. It almost belittles the problem itself. Not everything requires the latest whiz-bang technology. In fact, the dirty secret of machine learning—and, in a way, venture capital—is that so many problems could be solved by just applying simple regression analysis. Yet, very few people, very few industries do the bare minimum.
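Beyer's closing point, that simple regression is often the untried bare minimum, is easy to make concrete. A minimal ordinary-least-squares sketch on invented numbers:

```python
# Fit a simple linear regression before reaching for anything fancier.
# The data here is made up purely for illustration.
import numpy as np

# Hypothetical observations: ad spend (x) vs. revenue (y)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 5.9, 8.2, 9.8])

# Least-squares fit of y ~ slope * x + intercept
slope, intercept = np.polyfit(x, y, deg=1)
print(f"revenue ~ {slope:.2f} * spend + {intercept:.2f}")
```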
The O'Reilly Radar Podcast: Turning personalization into a two-way conversation.

In this week's Radar Podcast, O'Reilly's Mac Slocum chats with Sara Watson, a technology critic and writer in residence at Digital Asia Hub. Watson is also a research fellow at the Tow Center for Digital Journalism at Columbia and an affiliate of the Berkman Klein Center for Internet and Society at Harvard. They talk about how to optimize personalized experiences for consumers, the role of machine learning in this space, and what will drive the evolution of personalized experiences.

Here are a few highlights:

Accountability across the life cycle of data

One of the things I'm paying a lot of attention to is how the machine learning application of this changes what can and can't be explained about personalization. One of the things I'm really looking for as a consumer is to be able to say, "Okay. Why am I seeing this?" That's really interesting to me. I think more and more we're not going to be able to answer that question. Even now, I think a lot of times we can only provide one piece of the answer as to why I'm seeing this ad, for example. It's really going to get far more complicated, but at the same time, I think there's going to be a lot more need for accountability across that life cycle of data—whether we're talking about following data between the data brokers and the browser history and my preference model as a consumer. There's got to at least be a little bit more accountability across that pattern. It's obviously going to be a very complicated thing to solve. ...Honestly, I think accountability is going to be demand oriented, whether that is from a policy side or a consumer side. People have started to understand there is something happening in the news feed. It's not just a purely objective timeline. It's not linear. Just that level of knowledge has changed the discussion. That's why we're talking about the objectivity of Facebook's news feed and whether or not you're seeing political news on one side or the other, or the trending topics. Being part of the larger discussion, even if that's not reaching a huge range of consumers, is making consumers more educated about caring about these things.

Empowering the consumer

The ideal is not far off. It's just that in practice we're not there yet. I think a lot of people would probably agree that ideal personalization is about relevancy. It's about being meaningful to the consumer and providing something that's valuable. I also think it has to do with being empowering—so, not just pushing something onto the consumer, like we know what's best for you or we're anticipating your needs, but really giving them the opportunity to explore what they need and make choices in a smart way.

Shaping the conversation

One of the things we talk about on the data side of things is 'targeting' people. Think about that word. It's like targeting? Putting a gun to a consumer's head? When you think about it that way, it's like, okay, yeah, this is a one-way conversation. This is not really giving any agency to the person who is part of that conversation. I'm really interested in trying to open up that dialog in a way that's beneficial to all parties involved. ...I think a lot about the language that we use to talk about this stuff. I've written about the metaphors we use to talk about data—data lakes, data as the new oil, and all these kinds of industrial-heavy analogies that really put the focus on the people with the power and the technology and the industry side of things, without necessarily supporting the human side of things. ...It shapes whatever it is you think you're doing, either as a marketer or as the platform that's making those opportunities possible. It's not very sensitive to the subject, really.
The O'Reilly Radar Podcast: The value humans bring to AI, guaranteed job programs, and the lack of AI productivity.

This week, I sit down with Tom Davenport. Davenport is a professor of Information Technology and Management at Babson College, the co-founder of the International Institute for Analytics, a fellow at the MIT Center for Digital Business, and a senior advisor for Deloitte Analytics. He also pioneered the concept of "competing on analytics." We talk about how his ideas have evolved since writing the seminal work on that topic, Competing on Analytics: The New Science of Winning; his new book Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, which looks at how AI is impacting businesses; and, more broadly, how AI is impacting society and what we need to do to keep ourselves on a utopian path.

Here are some highlights:

How AI will impact jobs

In terms of AI impact, there are various schools of thought. Tim O'Reilly's in the very optimistic school. There are other people in the very pessimistic school, thinking that all jobs are going to go away, or 47% of jobs are going to go away, or we'll have rioting in the streets, or our robot overlords will kill us all. I'm kind of in the middle, in the sense that I do think it's not going to be an easy transition for individuals and businesses, and I think we should certainly not be complacent about it and assume the jobs will always be there. But I think it's going to take a lot longer than people usually think to create new business processes and new business models and so on, and that will mean that the jobs will largely continue for long periods. One of my favorite examples is bank tellers. We had about half a million bank tellers in the U.S. in 1980. Along come ATMs and online banking, and so on. You'd think a lot of those tasks would be replaced. We have about half a million bank tellers in the United States in 2016, so... Nobody would recommend it as a growth career, and it is slowly starting to decline, but I think we'll see that in a lot of different areas. And then I think there will be a lot of good jobs working alongside these machines, and that's really the primary focus of our book [Only Humans Need Apply: Winners and Losers in the Age of Smart Machines]: identifying five ways that humans can add value to the work of smart machines.

The appeal of augmentation

Think about what it is that humans bring to the party. Automation, in a way, is a kind of downward spiral. If everybody's automating something in an industry, the prices decline, and margins decline, and innovation is harder because you've programmed this system to do things a certain way. So, as a starting assumption, I think augmentation is a much more appealing one for a lot of organizations than, 'We're going to automate all the jobs away.'

Guaranteed job programs

If I were a leader in the United States, I would say the people who are going to need the most help are not so much the knowledge workers, who are kind of used to learning new stuff and transforming themselves, to some degree, but the long-distance truck drivers. We have three million in the United States, and I think you'll probably see autonomous trucks on the interstate, maybe in special lanes or something, before we see autonomous cars in most cities. That's going to be tougher, because truck drivers probably, as a class, are not that comfortable transforming themselves by taking courses here and there and learning the skills they need to learn. So, in that case, maybe we will need some guaranteed income programs—or, I'd actually prefer to see guaranteed job programs. There's some evidence on guaranteed income: you'd think, 'Well, maybe they'll take up new sports or artistic pursuits,' or whatever. Turns out, what most people do when they have a guaranteed income is sleep more and watch TV more, so kind of not good for society in general. Guaranteed job programs worked in the Great Depression—the Civilian Conservation Corps, programs for artists and writers, and so on—so we could do something like that. Whether this country would ever do it is not so clear.

The (lacking) economic value of AI

In a way, what's missing in the AI conversation is the same thing I saw missing when I started working in analytics: it's a very technical conversation, for the most part. There's not much yet on how it will change key business and organizational processes—how do we get some productivity out of it? I mean, we desperately need more productivity in this country. We haven't increased it much over the past several years. A great example is health care. We have systems that can read radiological images and say, 'You need a biopsy, because this looks suspicious,' in a prostate cancer or breast cancer image, or, 'This pathology image doesn't look good. You need a further biopsy or something, a more detailed investigation,' but we haven't really reduced the number of radiologists or pathologists at all, so what's the economic value? We've had these for more than a decade. What's the economic value if we're not creating any more productivity? I think the business and social and political change is going to be a lot harder for us to address than the technical change, and I don't think we're really focusing much on that. I mean, there's no discussion of it in politics, and not yet enough in the business context, either.
The O'Reilly Radar Podcast: AI on the hype curve, imagining nurturing technology, and gaps in the AI conversation.

This week, I sit down with anthropologist, futurist, Intel Fellow, and director of interaction and experience research at Intel, Genevieve Bell. We talk about what she's learning from current AI research, why the resurgence of AI is different this time, and five things that are missing from the AI conversation.

Here are some highlights:

AI's place on the wow-ahh-hmm curve of human existence

I think in some ways, for me, the reason for wanting to put AI into a lineage is that many of the ways we respond to it as human beings are remarkably familiar. I'm sure you and many of your viewers and listeners know about the Gartner Hype Curve: the notion that, at first, you don't talk about something very much, then the arc of it's everywhere, and then it goes to the valley of it not being so spectacular until it stabilizes. I think most humans respond to technology not dissimilarly. There's this moment where you go, 'Wow. That's amazing,' promptly followed by the 'Uh-oh, is it going to kill us?' promptly followed by the, 'Huh, is that all it does?' It's sort of the wow-ahh-hmm curve of human existence. I think AI is in the middle of that. At the moment, if you read the tech press, the trade presses, and the broader news, AI's simultaneously the answer to everything. It's going to provide us with safer cars, safer roads, better weather predictions. It's going to be a way of managing complex data in simple manners. It's going to beat us at chess. On the one hand, it's all of that goodness. On the other hand, it raises both the traditional fears of technology (is it going to kill us? Will it be safe? What does it mean to have autonomous things? What are they going to do to us?) and the reasonable questions about what models we are using to build this technology out. When you look across the ways it's being talked about, there are those three different factors: one of excessive optimism, one of a deep dystopian fear, and then another starting to run a critique of the decisions that are being made around it. I think that's, in some ways, a very familiar set of positions about a new technology.

Looking beyond the app that finds your next cup of coffee

I sometimes worry that we imagine that each generation of new technology will somehow mysteriously and magically fix all of our problems. The reality is 10, 20, 30 years from now, we will still be worrying about the safety of our families and our kids, worrying about the integrity of our communities, wanting a good story to keep us company, worrying about how we look and how we sound, and being concerned about the institutions in our existence. Those are human preoccupations that are thousands of years deep. I'm not sure they change this quickly. I do think there are harder questions about what that world will be like and what it means to have the possibility of machinery that is much more embedded in our lives and our world, and about what that feels like. In the fields that I come out of, we've talked about human-computer interactions since about the same time as AI, and they have really sat inside one paradigm, what we might call a command-and-control infrastructure. You give a command to the technology, you get some sort of answer back; whether that's old command prompt lines or Google search boxes, it is effectively the same thing. We're starting to imagine a generation of technology that is a little more anticipatory and a little more proactive, that's living with us—you can see the first generation of those, whether that's Amazon's Echo or some of the early voice personal assistants. There's a new class of intelligent agents that are coming, and I wonder sometimes, if we move from a world of human-computer interactions to a world of human-computer relationships, whether we have to start thinking differently. What does it mean to imagine technology that is nurturing, that cares, that wants you to be happy, not just efficient, or that wants you to be exposed to transformative ideas? It would be very different from the app that finds you your next cup of coffee.

There's a lot of room for good AI conversations

What's missing from the AI conversation are the usual things I think are missing from many conversations about technology. One is an awareness of history. Like I said, AI doesn't come out of nowhere. It came out of a very particular set of preoccupations and concerns in the 1950s and a very particular set of conversations. We have, in some ways, erased that history such that we forget how it came to be. For me, a sense of history is missing. As a result of that, I think more attention to a robust interdisciplinarity is missing, too. If we're talking about a technology that is as potentially pervasive as this one and as potentially close to us as human beings, I want more philosophers and psychologists and poets and artists and politicians and anthropologists and social scientists and critics of art—I want them all in that conversation because I think they're all part of it. I worry that this just becomes a conversation of technologists to each other about speeds and feeds and their latest instantiation, as opposed to saying, if we really are imagining a form of an object that will be in dialogue with us and supplementing and replacing us in some places, I want more people in that conversation. That's the second thing I think is missing. I also think it's emerging, and I hear in people like Julia Ng and my colleagues Kate Crawford and Meredith Whittaker an emerging critique of it. How do you critique an algorithm? How do you start to unpack a black-boxed algorithm to ask the questions about what pieces of data they are weighing against what, and why? How do we have the kind of dialogue that says, sure, we can talk about the underlying machinery, but we also need to talk about what's going into those algorithms and what it means to train objects. For me, there's then the fourth thing, which is: where is theory in all of this? Not game theory. Not theories about machine learning and sequencing and logical decision-making, but theories about human beings, theories about how certain kinds of subjectivities are made. I was really struck, in reading many of the histories of AI but also the contemporary work, by how much we make of normative examples in machine learning and in training, where you're trying to work out the repetition—what's the normal thing, so we should just keep doing it? I realized that sitting inside those are always judgments about what is normal and what isn't. You and I are both women. We know that routinely women are not normal inside those engines. There's something about what it would mean to start asking a set of theoretical questions that come out of feminist theory, out of Marxist theory, out of queer theory, out of critical race theory, about what it means to imagine normal here—what is and what isn't. Machine learning people would recognize this as the question of how you deal with the outliers. I think my theory would be: what if we started with the outliers rather than the center, and where would that get you? I think the fifth thing that's missing is: what are the other ways into this conversation that might change our thinking? As anthropologists, one of the things we're always really interested in is, can we give you that moment where we de-familiarize something. How do you take a thing you think you know and turn it on its head so you go, 'I don't recognize that anymore'? For me, that's often about how you give it a history. Increasingly, I realize in this space there's also a question to ask about what other things we have tried to machine learn on—what other things have we tried to use natural language processing, reasoning, induction on to make into supplemental humans or into things that do tasks for us? Of course, there's a whole category of animals we've trained that way—carrier pigeons, sheep dogs, bomb-sniffing dogs, Koko the gorilla, who could sign. There's a whole category of those, and I wonder if there's a way of approaching that topic that gets us to think differently about learning, because that's sitting underneath all of this, too. All of those things are missing. When you've got that many things missing, that's actually good. It means there's a lot of room for good conversations.
The O'Reilly Radar Podcast: The art and science of fostering serendipity skills.

On this week's episode of the Radar Podcast, O'Reilly's Mac Slocum chats with award-winning author Pagan Kennedy about the art and science of serendipity—how people find, invent, and see opportunities nobody else sees, and why serendipity is actually a skill rather than just dumb luck.

Here are some highlights:

The roots of serendipity

It's really helpful to go back to the original definition of serendipity, which arose in a very whimsical, serendipitous way back in the 1700s. There was this English eccentric named Horace Walpole who was fascinated with a fairy tale called 'The Three Princes of Serendip.' In this fairy tale, the three princes are Sherlock Holmes-like detectives who have amazing forensic skills. They can see clues that nobody else can see. Walpole was thinking about this and was very delighted with this idea, so he came up with this word 'serendipity.' In that original definition, Walpole really was talking about a skill: the ability to find what we're not looking for, especially really useful clues that lead to discoveries. In the intervening couple hundred years, the word has almost migrated to the opposite meaning, where we just talk about dumb luck. ...I'm not against that meaning, but I think it's really useful, especially in the age of big data, to go back to that original meaning and talk again about this as a skill.

The interplay between technology, the human mind, and serendipity

There's a really interesting interplay between tools and the human mind and serendipity. If you look at the history of science, when something like the telescope or the microscope appears, there are waves of discovery because these tools have made things that were formerly invisible visible. When patterns that you couldn't see before become visible, of course, smart, creative people find those patterns and begin working with them. I think the data tools and all the new tools that we've got are amazing because they make patterns visible that we wouldn't have been able to see before; but in the end, they're tools, and you've got to have a human mind at the other end of that tool. If the tool throws up a really important anomaly or pattern, you've got to have a human being there who not only sees it and recognizes it, but also gets super excited about it, defends it, explores it, figures it out, and gets excited about the opportunity there.

Serendipity as a highly emotional process

A class of people who tend to be very good at finding, inventing, and seeing opportunities that nobody else sees are surgeons. I'd really like to emphasize that this kind of problem solving, or this kind of pattern finding, is not just intellectualizing. It can be very emotional. Surgeons, when they have a problem (somebody dies), stay up at 3 a.m. thinking about what went wrong with their tools. It's that kind of worrying that is often involved in this kind of search for patterns or opportunities nobody else is seeing. It's not just an intellectual process, but a highly emotional one where you're very worried. This kind of process might not be very good for your health, but it's very good for your creativity—that kind of replaying. Not just noticing in the moment what's going wrong or what might be in the environment that nobody else is seeing, but going over it in your head and thinking about alternative realities.
The O'Reilly Radar Podcast: Designing for mainstream AI, natural language interfaces, and the importance of reinventing yourself.

This week we're featuring a conversation from earlier this year—O'Reilly's Mary Treseler chats with Giles Colborne, managing director of cxpartners. They talk about the transformative effects of AI on design, designing for natural language interactions, and why designers need to nurture the ability to reinvent themselves.

The conditions are ripe for AI to enter the mainstream

Mobile is the platform people want to use. ...That means that a lot of businesses are seeing their traffic shift to a channel that actually doesn't work as well, but people would like it to work well. At the same time, mobile devices have become incredibly powerful. Organizations are suddenly finding themselves flooded with data about user behavior. Really interesting data. It's impossible for a person to understand, but if you have a very powerful device in the user's hand, and you have powerful computers that can crunch this data and shift it around quickly, suddenly technologies like AI become really important, and you can start to predict what the user might want. Therefore, you can remove a little bit of the friction from mobile. Looking around at this landscape a couple of years ago, it was obvious that this was where something interesting was going to happen soon. Sure enough, you can see that everywhere now. The interest in AI is phenomenal. At its simplest, the crudest application of AI is simply that: to shortcut user input. That's a very simple application, but it's incredibly powerful. It has a transformative effect. That's why I think AI is really important, why I think its time is now, and why I think you're starting to see it everywhere. The conditions are ripe for AI to move from being an academic curiosity into what it is now: mainstream.

Designing natural language interfaces

One of the things we've been working on a lot recently is designing around chat interfaces and natural language interfaces, NLIs. That's a form of algorithms, a really complex form. Essentially, a lot of the features that you find in other forms of AI design are there in designing natural language interfaces. As we've been exploring that space, obviously our instinct is to go back to the psychology of language and really study that so that we're building it in, where we're understanding what we're hearing and trying to model artificial conversations. That's led us very quickly to realize that we need tools that support those sorts of language structures as well. We've been working with a company called Artificial Solutions, which provided us with wonderful tools that enable us to very rapidly model—and almost prototype in the browser—natural language interactions much faster than writing out scripts or running through Post-It notes. You can very quickly see, 'This is where this conversation feels awkward; this is where this conversation is breaking down.' I think that ability to rapidly prototype is incredibly important.

Embracing reinvention

I think anybody working today needs to be endlessly curious to keep up with the speed with which technology forces us to reinvent ourselves—AI is a great example of that; there are going to be an awful lot of roles that need to be reinvented as AI support tools become mainstream. That ability to be curious and to reinvent yourself is really important. The ability to see things from multiple points of view simultaneously is important as well. We've hired some great people from media backgrounds, and they very naturally have that ability to shift between the actor, if you like—which in our case is the interactive thing that we're designing—the audience, and the author, and are able to think about each of those viewpoints. As you're learning through a design process, you need to be able to hold each of those viewpoints in your head simultaneously. That's really important.
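The rapid-prototyping loop Colborne describes (script a few intents, run sample utterances through them, spot where the conversation breaks down) can be mimicked crudely in a few lines. The intents and keyword matching below are invented toys, not a stand-in for Artificial Solutions' tooling:

```python
# Toy prototyping loop for a natural language interface: match utterances
# to hand-written intents and flag the ones where the "conversation"
# breaks down. All intents and keywords are hypothetical.
import re

INTENTS = {
    "order_status": ["where", "order", "delivery"],
    "store_hours": ["open", "hours", "close"],
}

def match_intent(utterance):
    """Return the intent with the best keyword overlap, or None."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    scores = {name: len(words & set(kw)) for name, kw in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

for utterance in ["Where is my order?", "When do you open?", "I want a refund"]:
    intent = match_intent(utterance)
    print(f"{utterance!r} -> {intent or 'BREAKDOWN: no intent matched'}")
```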
The O'Reilly Radar Podcast: Imbuing robots with magic, eschewing deception in AI, and problematic assumptions of human-taught reinforcement learning.

In this episode, I sit down with Brad Knox, founder and CEO of Emoters, a startup building a product called bots_alive—animal-like robots that have a strong illusion of life. We chat about the approach the company is taking, why robots or agents that pass themselves off as human without any transparency should be illegal, and some challenges and applications of reinforcement learning and interactive machine learning.

Here are some links to things we talked about and some highlights from our conversation:

Links:
bots_alive
Bot Party
Knox's article: Framing reinforcement learning from human reward: Reward positivity, temporal discounting, episodicity, and performance
Knox's article: Power to the People: The Role of Humans in Interactive Machine Learning
bots_alive's NSF award: Design, deployment, and algorithmic optimization of zoomorphic, interactive robot companions

Creating a strong illusion of life

I've been working on a startup company, Emoters. We're releasing a product called bots_alive, hopefully in January, through Kickstarter. Our big vision there is to create simple, animal-like robots that have a strong illusion of life. This immediate product is going to be a really nice first step in that direction. ...If we can create something that feels natural, that feels like having a simple pet—maybe not for a while anything like a dog or cat, but something like an iguana or a hamster—where you can observe it and interact with it, it would be really valuable to people. The way we're creating that goes back to research I did when I was at MIT with Cynthia Breazeal and a master's student, Sam Spaulding—machine learning from demonstration on human-improvised puppetry. Our hypothesis for this product starts from how characters are usually built: if you create an artificially intelligent character using current methods, you sit back and think, 'Well, in this situation, the character should do this.' For example, a traditional AI character designer might write the rule for an animal-like robot that if a person moves his or her hand quickly, the robot should be scared and run away. That results in some fairly interesting characters, but our hypothesis is that we'll get much more authentic behaviors, something that really feels real, if we first allow a person to control the character through a lot of interactions. Then, we take the records and the logs of those interactions and learn a model of the person. As long as that model has good fidelity—it doesn't have to be perfect, but captures the puppeteer with pretty good fidelity—and the puppeteer is actually creating something that would be fun to observe or interact with, then we're in a really good position. ...It's hard to sit back and write down on paper why humans do the things we do, but what we do in various contexts is going to be in the data. Hopefully, we'll be able to learn that from human demonstration and really imbue these robots with some magic.

A better model for tugging at emotions

The reason I wrote that tweet [Should a robot or agent that widely passes for human be illegal? I think so.] is that if a robot or an agent—you could think of an agent as anything that senses the state of its environment, whether it's a robot or something like a chat bot, just something you're interacting with—can pass as human and it doesn't give some signal or flag that says, 'Hey, even if I appear human, I'm not actually human,' that really opens the door to deception and manipulation. For people who are familiar with the Turing Test—which is by far the most well-known test for successful artificial intelligence—the issue I have with it is that, ultimately, it is about deceiving people, about them not being able to tell the difference between an artificially intelligent entity and a human. For me, one real issue is that, as much as I'm generally a believer in capitalism, I think there's room for abuse by commercial companies. For instance, it's hard enough when you're walking down the street and a person tries to get your attention to buy something or donate to some cause. Part of that is because it's a person and you don't want to be rude. When we create a large number—eventually, inexpensive fleets—of human-like or pass-for-human robots that can also pull on your emotions in a way that helps some company, I think the negative side is realized at that point. ...How is that not a contradiction [of our company's mission to create a strong illusion of life]? The way I see the illusion of life (and the way we're doing it at bots_alive) is very comparable to cartoons or animation in general. When you watch a cartoon, you know that it's fake. You know that it's a rendering, or a drawing, or a series of drawings with some voice-over. Nonetheless, if you're like most people, you feel and experience these characters in the cartoon or the animation. ...I think that's a better model, where we know it's not real but we can still feel that it's real to the extent that we want to. Then, we have a way of turning it off, and we're not completely emotionally beholden to these entities.

Problematic assumptions of human-taught reinforcement learning

I was interested in the idea of human training of robots in an animal-training way. Connecting that to reinforcement learning, the research question we posed was: instead of the reward function being coded by an expert in reinforcement learning, what happens if we give buttons or some interface to a person who knows nothing about computer science, nothing about AI, nothing about machine learning, and that person gives the reward and punishment signals to an agent or a robot? Then, what algorithmic changes do we need to make so the system learns what the human is teaching the agent to do? If it had turned out that the people in the study had not violated any of the assumptions of reinforcement learning when we actually did the experiments, I think it wouldn't have ended up being an interesting direction of research. But this paper dives into the ways that people did violate, deeply violate, the assumptions of reinforcement learning. One emphasis of the paper is that people tend to have a bias toward giving positive rewards. A large percentage of the trainers in our experiments would give more positive rewards than punishment—or, in reinforcement learning terms, 'negative rewards.' We found that people were biased toward positive rewards. The way reinforcement learning is set up, a lot of reinforcement learning tasks are what we call 'episodic'—roughly, what that means is that when the task is completed, the agent can't get further reward. Its life is essentially over, though not in a negative way. When we had people sit down and give reward and punishment signals to an agent trying to get out of a maze, they would give a positive reward for getting closer to the goal, but then the agent would learn, correctly (at least by the assumptions of reinforcement learning), that if it got to the goal, (1) it would get no further reward, and (2) if it stayed in the world it's in, it would get a net positive reward. The weird consequence is that the agent learns that it should never go to the goal, even though that's exactly what these rewards are supposed to be teaching it. In this paper, we discussed that problem and showed the empirical evidence for it. Basically, the assumptions that reinforcement learning typically makes are really problematic when you're letting a human give the reward.
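Knox's maze result is easy to reproduce in miniature. The sketch below sets up a five-state corridor in which a positivity-biased "human" reward gives +1 for moving toward the terminal goal and a small punishment otherwise (all numbers invented); value iteration then shows the discounted-optimal agent choosing to loop forever rather than finish the episode, exactly the failure mode described:

```python
# Toy demonstration: positivity-biased human reward + an episodic task
# teaches the agent to avoid the goal. States 0..4 on a corridor; state 4
# is the terminal goal and yields no further reward once reached.
N_STATES, GOAL, GAMMA = 5, 4, 0.95
R_TOWARD, R_AWAY = 1.0, -0.2   # invented, positivity-biased trainer rewards

def step(state, action):
    """action is +1 (toward goal) or -1 (away); reward comes from the trainer."""
    nxt = min(max(state + action, 0), GOAL)
    return nxt, (R_TOWARD if action == +1 else R_AWAY)

V = [0.0] * N_STATES           # terminal goal keeps value 0

def q(s, a):
    nxt, r = step(s, a)
    return r + GAMMA * (0.0 if nxt == GOAL else V[nxt])

# Value iteration over the non-terminal states.
for _ in range(500):
    for s in range(GOAL):
        V[s] = max(q(s, +1), q(s, -1))

best_action = max((+1, -1), key=lambda a: q(GOAL - 1, a))
print("Values:", [round(v, 2) for v in V])
print("Next to the goal, the greedy agent moves",
      "toward the goal" if best_action == +1 else "AWAY from the goal")
```

Running it shows the value of oscillating near the goal (collecting +1 every other step forever) dwarfs the one-time +1 for finishing, so the greedy policy moves away.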
The O'Reilly Radar Podcast: Big data for security, challenges in fraud detection, and the growing complexity of fraudster behavior.

This week, I sit down with Fang Yu, co-founder and CTO of DataVisor, where she focuses on big data for security. We talk about the current state of the fraud landscape, how fraudsters are evolving, and how data analytics and behavior analysis can help defend against—and prevent—attacks.

Here are some highlights from our chat:

Challenges in using supervised machine learning for fraud detection

In the past few years, machine learning has taken a big role in fraud detection. There are a number of supervised machine learning techniques and breakthroughs, especially for voice, image recognition, etc. There's also an application of machine learning to detect fraud, but it's a little challenging because supervised machine learning needs labels. It needs to know what good users and bad users look like, what good behavior is and what bad behavior is; the problem in many fraud cases is that attackers constantly evolve. Their patterns change very quickly, so in order to detect an attack, you need to know what they will do next. That is ultimately hard, and in some cases—for example, financial transactions—it is too late. For supervised machine learning, you will have a chargeback label from the bank because someone saw their credit card get abused and called the bank. That's how you get the label. But that happens well after the actual transaction takes place, sometimes even months later, and the damage is already done. And moving forward, by the time you have a trained model to prevent it from happening again, the attacker has already changed his or her behavior. Supervised machine learning is great, but when applied to security, you need a quicker and more customized solution.

An unsupervised machine learning approach to identify sleeper cells

At DataVisor, we actually do things differently from the traditional rule-based or supervised machine learning-based approaches. We do unsupervised detection, which does not need labels. At a high level, today's modern attackers do not use a single account to conduct fraud. If they have a single account, the fraud they can conduct is very limited. What they usually do is construct an army of fraud accounts, either through mass registration or account takeovers, and then each account commits a little fraud. They can do spamming, they can do phishing, they can do all types of different bad activities. But together, because they have many accounts, they conduct attacks at a massive scale. At DataVisor, the approach we take is an unsupervised one. We do not look at individual users anymore. We look at all the users in a holistic view and uncover their correlations and linkages. We use graph analysis and clustering techniques, etc., to identify these fraudster rings. We can identify them even before they have done anything, or while they are sleeping, so we call them "sleeper cells."

The big payoff of fraudulent faking

Nowadays, we actually see fraud becoming pretty complex and even more lucrative. For example, e-commerce platforms sometimes offer reviews. They let users rate, like, and write reviews about products. All of these can be leveraged by the fraudsters—they can write fake reviews and incorporate bad links in the write-ups in order to promote their own products. So, they do a lot of fake likes to promote. Now, we also see a new trend going from the old days of fake impressions and fake clicks to actual fraudulent installs. For example, in the old days, when a gaming company had a new game coming out, they would purchase users to play these games—they would pay people $50 to play an Xbox game. Now, many of the games are free, but they need to drive installs to improve their rank in app stores. These gaming providers rely on app marketing, purchasing users from different media sources, which can be pretty expensive—a few dollars per install. So, the fraudsters start to emulate users and download these games. They pretend they are media sources and cash in by just downloading and playing the games. That payoff is 400 times more than that of a fake click or impression.

The future of fraudsters and fraud detection

Fraudsters are evolving to look more like real users, and it's becoming more difficult to detect them. We see them incubate for a long time. We see them using the cloud to circumvent IP blacklists. We see them skirting two-factor authentication. We see them opening apps, making purchases, and doing everything a real, normal user does. They are committing fraud at a huge scale across all industries, from banking and money laundering to social, and the payoff for them is equally massive. If they are evolving, we need to evolve, too. That's why new methods, such as unsupervised machine learning, are so critical to staying ahead of the game.
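The unsupervised, holistic linkage analysis Yu describes can be gestured at with a toy graph: link accounts that share registration signals, then flag large connected clusters as candidate rings. The accounts, signals, and size threshold below are invented; a real system would be far more sophisticated.

```python
# Toy unsupervised fraud-ring detection: connect accounts that share an
# IP or device fingerprint, then flag large connected components.
import networkx as nx

accounts = {   # all hypothetical
    "acct1": {"ip": "10.0.0.1", "device": "dev_A"},
    "acct2": {"ip": "10.0.0.1", "device": "dev_A"},
    "acct3": {"ip": "10.0.0.1", "device": "dev_B"},
    "acct4": {"ip": "172.16.9.9", "device": "dev_C"},
}

G = nx.Graph()
G.add_nodes_from(accounts)
names = list(accounts)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        # Link any two accounts sharing a registration signal.
        if set(accounts[a].values()) & set(accounts[b].values()):
            G.add_edge(a, b)

RING_SIZE = 3   # invented threshold: clusters this large look coordinated
for cluster in nx.connected_components(G):
    if len(cluster) >= RING_SIZE:
        print("candidate fraud ring:", sorted(cluster))
```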
The O'Reilly Radar Podcast: Thinking critically about AI, modeling language, and overcoming hurdles.

This week, I sit down with Hilary Mason, data scientist in residence at Accel Partners and founder and CEO of Fast Forward Labs. We chat about current research projects at Fast Forward Labs, adoption hurdles companies face with emerging technologies, and the AI technology ecosystem—what's most intriguing for the short term and what will have the biggest long-term impact.

Here are some highlights:

Missing wisdom

There are a few things missing [from the AI conversation]. I think we tend to focus on the hype and eventual potential without thinking critically about how we get there and what can go wrong along the way. We have a very optimistic conversation, which is something I appreciate. I'm an optimist, and I'm very excited about all of this stuff, but we don't really have a lot of critical work being done on things like: how do we debug these systems, what are the consequences when they go wrong, how do we maintain them over time, and operationalize and monitor their quality and success, and what do we do when these systems infiltrate pieces of our lives where automation may have highly negative consequences? By that, I mean things like medicine or criminal justice. I think there's a big conversation that is happening, but the wisdom is still missing. We haven't gotten there yet.

Making the impossible possible

I'm particularly intrigued at the moment by being able to model language. That's something where I think we can't yet imagine the ultimate applications, but it starts to make things that previously would have seemed impossible possible—things like automated novel writing and poetry, things that we would like to argue are purely human creative enterprises. It starts to make them seem like something we may one day be able to automate, which I'm personally very excited about. The impact question is a really good one, and I think it is not one technology that will have that impact. It's the same reason we're starting to see all these different AI products pop up. It's the ensemble of all of the techniques falling under this umbrella together that is going to have that kind of impact and enable applications like the Google Photos app, which is my favorite AI product, or self-driving cars, or things like Amazon's Alexa, but actually smarter. That's a collection of different techniques.

Making sentences and languages computable

We've done a project in automated summarization that I'm very excited about—applying neural networks to text, where you can put in a single article and it will extract sentences from that article that, combined, contain the same information as the article as a whole; this is extractive summarization. We also have another formulation of the problem, which is multi-document summarization, where we apply this to Amazon product reviews. You can put in 5,000 reviews, and it will tell you these reviews tend to cluster in these 10 ways, and for each cluster, here's the summary of that cluster's reviews. It gives you the capability to read or understand thousands of documents very quickly. ...I think we're going to see a ton of really interesting things built on the techniques that underlie that. It's not just summarization; it's making sentences and languages computable.

Adoption hurdles

There are two adoption hurdles [for emerging technologies] that I'll name. The first is that sometimes these technologies get used because they're cool, not because they're useful. If you build something that's not useful, people don't want to use it. That can be a struggle. The second is that people are generally resistant to change. When you're in an organization and you're trying to advocate for the use of a new technology to make the organization more efficient, you will likely run into friction. In those situations, it's a matter of time and of making the people who are most resistant look good.
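As a deliberately simple stand-in for the neural approach Mason's team used, extractive summarization can be sketched by scoring each sentence against the document as a whole and keeping the top scorers verbatim. Everything below is an illustrative toy:

```python
# Toy extractive summarization: rank sentences by TF-IDF similarity to
# the full document and keep the most representative ones verbatim.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

document = (
    "The product ships quickly. The battery lasts two days. "
    "Shipping was fast and painless. Overall a solid purchase."
)
sentences = [s.strip() + "." for s in document.split(".") if s.strip()]

vec = TfidfVectorizer()
S = vec.fit_transform(sentences)          # one TF-IDF row per sentence
doc_vector = vec.transform([document])    # the document as a whole
scores = cosine_similarity(S, doc_vector).ravel()

# Keep the 2 sentences most similar to the full text, in original order.
top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:2]
for i in sorted(top):
    print(sentences[i])
```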
O'Reilly Radar Podcast: SNAFU Catchers, knowing how things work, and the proper response to system discrepancies.

In this week's episode, O'Reilly's Mac Slocum sits down with Richard Cook and David Woods. Cook is a physician, researcher, and educator who is currently a research scientist in the Department of Integrated Systems Engineering at Ohio State University and emeritus professor of health care systems safety at Sweden's KTH. Woods is also a professor at Ohio State University, where he leads the Initiative on Complexity in Natural, Social, and Engineered Systems and co-directs the university's Cognitive Systems Engineering Laboratory. They chat about SNAFU Catchers; anomaly response; and the importance of understanding not only how things fail, but how things normally work.

Here are a few highlights:

Catching situations abnormal

Cook: We're trying to understand how internet-facing businesses manage to handle all the various problems, difficulties, and opportunities that come along. Our goal is to understand how to support people in that kind of work. It's a fast-changing world that mostly appears on the surface to be smoothly functioning but, in fact, as people who work in the industry know, is always struggling with different kinds of breakdowns, things that don't work correctly, and obstacles that have to be addressed. SNAFU Catchers refers to the idea that people are constantly working to collect, and respond to, all the different kinds of things that foul up the system, and that that's the normal situation, not the abnormal one.

Woods: [SNAFU] is a coinage from the grunts in World War II, on our side, the winning side. Situation normal: the normal situation is all fucked up, right? The pristine, smooth picture—work as designed, follow the plan, put in automation, everything is great—isn't really the way things work in the real world. It appears that way from a distance, but on the ground there are gaps, uncertainties, conflicts, and trade-offs. Those are normal—in fact, they're essential. They're part of this universe and the way things work. What that means is, there is often a breakdown, a limit in terms of how much adaptive capability is built into the system, and we have to add to that. Because surprise will happen, exceptions will happen, anomalies will happen. Where does that extra capacity to adapt to surprise come from? That's what we're trying to understand and focus on—not the SNAFU; that's just normal. We're focusing on the catching: what are the processes, abilities, and capabilities of the teams, groups, and organizational practices that help you catch SNAFUs? That's about anticipation and preparation, so you can respond quickly and directly when the surprise occurs.

Know how things work, not just how they fail

Cook: There's an old surgical saying: 'Good results come from experience, and experience comes from bad results.' That's probably true in this industry as well. We learn from experience by having difficulties and solving those sorts of problems. We live in an environment in which people are doing this as apprenticeships very early in their lives, and the apprenticeship gives them opportunities to experience different kinds of failure. Having those experiences tells them something about the kinds of activities they should perform once they sense a failure is occurring, and also some of the different kinds of things they can do to respond to different kinds of failures. Most of what happens in this is a combination of understanding how the system is working and understanding what's going on that suggests it's not working in the right sort of way. You need two kinds of knowledge to be able to do this: not just knowledge of how things fail, but also knowledge of how things normally work.

No anomaly is too small to ignore

Woods: I noticed that what's interesting is, you have to have a pretty good model of how it's supposed to work. Then you start getting suspicious. Things don't quite seem right. These are the early signals, sometimes called weak signals. These are easy to discount away. One of the things you see—and this happened in [NASA] mission control, for example, in its heyday—is that all discrepancies were anomalies until proven otherwise. That was the cultural ethos of mission control. When you lose that, you see people discounting: 'Oh, that discrepancy isn't going to really matter. I've got to get this other stuff done,' or, 'If I foul it up, some other things will start happening.' What we see in successful anomaly response is this early ability to notice something starting to go wrong, and it is not definitive, right? If it were definitive, it would cross some threshold, it would activate some response, it would pull other resources in to deal with it, because you don't want it to get out of control. The preparation for, and success at, handling these things is to get started early. The failure mode is being slow and stale—you let it cook too long before you start to react. When you're slow and stale, the cascade can get away from you, and you lose control. When teams or organizations are effective at this, they notice things are slightly out, and then pursue it. They dig a little deeper, follow up, test it, bring some other people to bear with different or complementary expertise. They don't give up quickly and say, 'That discrepancy is just noise and can be ignored.' Now, most of the time, those discrepancies probably are noise, right? It isn't worth the effort. But sometimes those are the beginnings of something that's going to threaten to cascade out of control.
The O'Reilly Radar Podcast: Prediction algorithms, cognitive biases, and how our brains come online.

On this week's episode, I chat with Sam Wang, professor of neuroscience and molecular biology at Princeton. Wang is also a co-founder of the Princeton Election Consortium, a site focused on analyzing and predicting U.S. national elections. We talk about the site's prediction algorithm and this crazy election cycle, and the role neuroscience may have played. We also talk about the current research Wang and his team are working on, the U.S. BRAIN Initiative, and the powerful role governments play in academic research.

Here are some highlights:

Predicting elections

What we do at the [Princeton Election Consortium] site is collect publicly available polls. This is a website at election.princeton.edu. We take those polls and feed them into a script that I've written and scripts that my students have written, and on an automated basis, we turn those polling data into a clear, sharp snapshot of exactly where the presidential race appears to be at any given moment, on any day during the campaign. It's sort of a tracking index that says what would happen in an election held today, and we do it using what I would call optimal statistical tools. ... The Princeton Election Consortium is open source, and so anybody can download the scripts. They're written in MATLAB and in Python, and there are some shell scripts. Anybody can download the stuff and run it for themselves. ...

We take all the available state polls for a given state, say Virginia, for example, which is a competitive state, and we take the median margin for that state between Hillary Clinton and Donald Trump, the median margin of all the polls, and that gives the best estimate of what the likely margin is going to be in the election. Then we use the spread in that set of data to figure out how probable it is that Hillary Clinton or Donald Trump is in the lead. That's one probability. Then we do that over and over again for all 51 races, the 50 states plus the District of Columbia. In each case there's an outcome that's like a coin toss, except that the coin toss is worth some number of electoral votes, and then we combine all those probabilities using a simple math trick, a function in MATLAB that's called convolve. We turn all those probabilities into an exact distribution that has a lot of sharp peaks in it corresponding to particular combinations of states. It's anywhere from zero to 538 electoral votes for Hillary Clinton, and the same for Donald Trump. All that's done automatically, and that tells us how conditions are today. Then on top of that, we add some assumptions about where things are likely to go by Election Day, and that's a random drift factor. That random drift factor gives us a view of what is likely to happen on November 8. [A minimal code sketch of this convolution step appears after these highlights.]

Getting to the truth

Getting back to the general subject of mass media and also narrowcast media, we have all kinds of sources of information available to us now: Facebook, Twitter, news feeds, talk radio, email. We have all these channels of information, and it's really super hard for any individual to cut through all that clutter and get accurate information, even if some of those channels are high quality. I think these cognitive biases make it super tough for citizens to get by in what should be a golden age of information. ...
Another big challenge, one that no single person can address, is figuring out how to create a media ecosystem in which the right information is more likely to get into people's heads. For example, it's actually not a bad thing for media organizations to bring on people of opposing viewpoints, because bringing in people of opposing viewpoints causes ideas to get examined critically, and it's possible to have everything in one neat package where you get to see it all at once. Just to give you an example, it gives people a direct comparison between Clinton and Trump, to see them next to each other. Debates have an important function where you see a direct interaction between these two very different candidates. Anyway, getting back to getting to the truth of things, I think a major challenge for media is how to get information across accurately to people and have it stick.

Finding the developmental source of coordinated thought

In the lab, we're super excited about some stuff we're doing now to try to understand how cognitive and social abilities arise in the brain. For the last 15 years at Princeton, I have been interested in how the brain changes in response to experience: how learning happens and how development happens. For a lot of that time, I've been interested in the nuts and bolts of how single connections work. We use optical methods and advanced molecular biological methods to watch brain activity in action. We can do things like use optical methods to watch a brain circuit in the brain of a mouse as the mouse is navigating a virtual maze or investigating some puzzle it has to solve, like a simple maze. A lot of what we've been doing is understanding how the brain integrates information to learn from its environment. That involves multiple brain systems, and I study a brain region called the cerebellum, which, if you look in textbooks, is usually described as a region that's important for balance or movement; the way it does that is by integrating sensory information to try to keep mental processes on track.

In the last few years, we've become very interested in the possibility that the cerebellum controls not only fine actions in adult life, but might even act to shape the growing brain. One piece of evidence suggesting that is a clinical observation others have made: if babies by accident have some injury to the cerebellum at birth (say, a difficult birth with a bleed), then the odds of autism go up by a factor of 40. That's bigger than the cancer risk that comes from smoking. What we suspect, based on that, is that maybe the cerebellum is some kind of guide that plays an absolutely necessary function for babies to grow their mental capacities. Just as the cerebellum guides coordinated movement, we think that maybe it acts to guide the development of coordinated thought, where babies learn to recognize faces and voices and pick up language. There are all these incredible things that babies do. We're really deeply immersed in testing the idea that this is a part of the brain that helps teach the rest of the brain to come online.
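To make the convolution step Wang describes concrete, here is a minimal Python sketch of the same trick (NumPy's np.convolve playing the role of the MATLAB convolve function he mentions). The state names, poll margins, and the sigma value standing in for polling error are illustrative assumptions, not the Consortium's actual inputs; the real scripts are downloadable from election.princeton.edu.

    import numpy as np
    from scipy.stats import norm

    # Hypothetical poll margins (Clinton minus Trump, percentage points)
    # for three states; the real model uses all 51 races.
    state_polls = {
        "Virginia": (13, [4.0, 6.0, 3.0, 5.0]),
        "Ohio": (18, [-1.0, 1.0, -2.0, 0.0]),
        "Pennsylvania": (20, [2.0, 4.0, 1.0, 3.0]),
    }

    def win_probability(margins, sigma=3.0):
        # P(candidate leads) from the median poll margin; sigma stands in
        # for polling error, which Wang derives from the polls' own spread.
        return norm.cdf(np.median(margins) / sigma)

    # Start from a certain outcome of 0 electoral votes, then convolve in
    # each state's two-point distribution: p at +EV votes, (1 - p) at 0.
    dist = np.array([1.0])
    for ev, margins in state_polls.values():
        p = win_probability(margins)
        state_dist = np.zeros(ev + 1)
        state_dist[0] = 1.0 - p   # lose the state: 0 electoral votes
        state_dist[ev] = p        # win the state: all of its electoral votes
        dist = np.convolve(dist, state_dist)

    # dist[k] is the probability of winning exactly k of these electoral
    # votes; with all 51 races included, k would run from 0 to 538.
    for k, prob in enumerate(dist):
        if prob > 0.001:
            print(k, round(float(prob), 3))

Convolution is the right tool here because the races are treated as independent coin tosses: the distribution of a sum of independent random variables is the convolution of their individual distributions.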
The O'Reilly Radar Podcast: Navigating the increasing globalization of industry and commerce.

In this episode of the Radar Podcast, I chat with John Bassett III, chairman of the board of the Vaughan-Bassett Furniture Company. We talk about globalization and the effect it's had on the furniture industry, the international trade battle he waged (which Beth Macy wrote about in her book Factory Man), Bassett's book Making It in America, and what entrepreneurs need to know to succeed in business today.

Here are some highlights:

Panic sets in

They started making furniture in China, and we competed very well through the 1990s. All of this changed dramatically in 2001, when China became a member of the World Trade Organization (WTO). Once they were in the WTO, their prices plummeted, and the bottom dropped out of the market. By that time, I had long since left Bassett Industries and joined my wife's family furniture company, Vaughan-Bassett Furniture, on January 1, 1983. I was over here at the time, but the whole wood furniture industry was affected. Factories were closing left and right. Thousands of people were being laid off. There was panic. That's the only way to explain it.

Then we found out that there was a rule at the WTO and a law on the United States books. The law actually goes back to the 1930s; it's called dumping. Dumping is when you sell a product in another country for less than your manufactured cost, and what you're doing is dumping your product in that country to force everybody out of business so you can capture all of the business. That's exactly what happened. I led a coalition that challenged the Chinese, and at that time, it was the largest dumping petition ever brought against the government of China at the WTO.

Going to battle

Prices were plummeting, and many of the United States manufacturers at the time said, "Well, we'll close our factories and just buy this product overseas. That's what we'll do." I wanted to go and actually look the people in the eye. I wanted to see exactly what was going on. I went to China, and I went to Northern China; the prices seemed to be lower up there than anywhere else. I met a gentleman who was erecting a huge, well, a series of factories, with obvious Chinese government help. I told him, 'I might be interested in buying your product.' He looked me in the eye and said, 'This is what you must do.' I said, 'All right.' He said, 'The first thing is you must close every factory you have. You must get rid of all of your people. You must sell all of your machinery, and you must put yourself in my hands.' There was no smile on his face. He was extremely serious. As we left to fly back to the United States, I told my son Wyatt, 'Get ready. We're going to war. They are being supported by the Chinese government. They are picking up the bill. This is not what we were promised when the Commerce Department asked us to support GATT. They're dumping, and either we resist this or this industry will disappear.'

Your power is in your people

I knew then the rules of the game had changed. ... We just had to adjust the way we ran our businesses. Everybody talks about innovation, education, entrepreneurship, all of that, and I agree with all of it. But we did something different. After Beth [Macy] wrote her book, Factory Man, I wrote a book called Making It in America. We organized our people and our organizations.
Before we shut everything down (and we did close some factories), we went to our people and we said, 'If we're going to survive, we've got to do this together.' The book is about how we organized our people. The people in these plants wanted to be a part of this. They did not want some CFO looking at figures and closing the plant. They said, 'We can make a better product. We can make a less expensive product. We can deliver it faster, and we can do all these other things.' The American worker is an exceedingly efficient worker, but you have to give them a chance.

Playing by the rules

My position is this: there are rules of the game out there for everybody in the WTO, including the Chinese and the Indians and others. Let's play by the rules. Donald Trump talks about new laws. We don't need new laws. We need to enforce the laws we've already pledged to uphold. Let me give you an example from the anti-dumping law. I went back to when we started our petition, covering 2003 through 2015. I took three countries: China and India, the two that certainly have the largest populations and probably the most to gain, and the United States, which probably has the most to defend, being the largest market. I looked at how many dumping petitions had been imposed, not initiated but actually imposed, by these countries against other countries over that 11- or 12-year period. India leads the list: they imposed 353 anti-dumping petitions against other members of the WTO. Number two was China, at 166. Number three was the United States, at 163. The country that had the most to defend imposed the least. ... I think there are many benefits to globalization, but when countries cheat, they should be called to task for it.

How to make it in America

I would offer new entrepreneurs several pieces of advice. Number one: if you're going to play on this ball field and play in this game, be sure you're adequately capitalized. A lot of the people you're going to compete against have staying power, so be sure you have enough capital to take on whoever your adversaries are going to be. Number two: don't overlook the power of the people working for you. And obey five of the 12 rules [I outline in my book], what I call the Five Great Rules. One: attitude. You have to start with an attitude of 'we're going to win.' Don't start as a loser. Two: leadership. Don't ask anybody to do something you won't do yourself. Roll up your sleeves and go to work with your people. Three: change. When you start out, be willing to change, because things move so fast today that what you do today might not be relevant six months from now. It might not be relevant six weeks from now. Four: don't panic. They love to panic you and tell you you can't do it. The easiest battle to win is when the other side surrenders before the first shot is fired. Just calm down. There's never been a good business decision made when people were panicking. Five: teamwork and communication. Everybody in your organization has to be on board, and the way you get them on board is through communication. Constantly tell people where you are and ask for their help. Those are the things I would tell a young entrepreneur to do.
The O'Reilly Radar Podcast: Perceptual robotics, post-evolutionary humans, and designing our future with intent.

In this Radar Podcast episode, I chat with Haakon Faste, a design educator and innovation consultant. We talk about his interesting career path, including his perceptual robotics work, his teaching approaches, and his mission with the Rolf A. Faste Foundation. We also talk about navigating our way to a "post-human" world and the importance of designing to make the world a more human-centered place.

Here are a few highlights:

Multimodal interface systems

What these robotic systems allow you to do, which is really exciting, is take someone who's an expert at a certain skill, maybe an athlete, and put them into a robotic system and ask them to perform their craft, if you will. You can imagine taking an expert, someone like Tiger Woods, who's a fantastic golfer, having him climb into a robot suit and show his perfect golf swing, recording it, and then having novices climb into the robot suit, hit play, and sort of play back the expert's body knowledge into the novice's body. This is the notion of a multimodal interface. You can couple all of the modalities of sensing: the visual sensation of being in a situation, the sound effects, and then haptic feedback, whether that's force feedback on your gross body movement or specifically simulating what they call pseudo-haptics, the sensation of touch. You can put little vibrating motors all over your body and make a responsive suit. We were interested in studying what happens if you put someone in one of these systems and show them visually how it should be done and then ask them to perform, or play it back into their body and ask them to perform it. You can study how quickly people learn those skills. This is a really powerful set of technologies for things like post-stroke rehabilitation, situations where people have lost the ability to use their body, and other kinds of applications. You can imagine that if you try to teach a robot how to walk or how to perform a skill, it's very important to be able to capture the skill in the first place so the machine can learn from you. [A toy sketch of this kind of trajectory playback appears after these highlights.]

Designing intuitive experiences

First, we spend a lot of time learning how to use our body; then we move into a kind of visual stage; then during adolescence we learn to deal with our emotions; and finally we learn higher-order skills: reasoning, critical thinking, symbolic thinking, and so forth. What works well from a perceptual standpoint is to recognize that a lot of what we consider expertise in our use of technology is thinking about it from this adult, symbolic perspective. We presume that if we tell someone something with words, they will do what we tell them to, whereas in actuality, they're going to respond very automatically at an emotional level and even at the lower level of embodied experience of the world. From a perceptual perspective, those systems are deep in our control system as humans, and they're very reflexive and automatic; that's the source of our intuition. So, when we're trying to design experiences that make things, say, more intuitive for someone, it's really important to leverage the aesthetic, feeling-based, and emotional aspects of an experience, because they more immediately connect with what is intuitive to the person.
Of course, you never know how an experience will be used until you observe people using it, and a lot of the time your hunches are quite wrong, which is why designers use methodologies around rapid prototyping and iterative design to get stuff quickly into the world without presuming that we know what's going to work.

Giving our tools the capacity to shape our future

Humans have evolved to have a certain set of capacities when we interact with the world. Today, we live amid all of these new technologies that are really shaping the way we think and act. We have computers, the Internet, mobile devices, augmented reality, and other things that fundamentally alter our sense of what a human is: an organ transplant, a drug designed to have some kind of emotional effect, genetic engineering. These are capacities that are fundamentally changing the biological nature of what humans are. Posthumanist theorists call this a kind of transhuman, or transitional human, state. What we're moving toward is hypothetical—it's hard to pin down what the future will be, of course, and we don't want to be overly deterministic—but I think we can say pretty confidently that we're going to have capacities that radically exceed our present biological capabilities. The theory goes that we then transcend the unambiguous nature of what it means to be human and become something else. When you mix in superintelligence or autonomous robotics or life extension—the ability to live forever, to encode your mind and upload it into the cloud, to design your own children, to simulate possible variants of yourself, trying a variety of different medical treatments and then picking the one that survives—we're moving out of a state of humans designed by nature and evolution, through this contemporary transhuman state of a world designed by humans (because we design our media and our technology and so forth), into a state where we are no longer human because we're post-evolutionary. We've given our tools the capacity to shape our own future.

Designing the future with intent

You have to recognize that as a designer you are always doing what you're doing in the service of power, and by doing that project you're perpetuating the values of whatever is driving that broader intent. As a designer in that system, you need to be very cognizant of your own values, of what you are and are not willing to do. You need to push back when you feel there needs to be pushback. We live in a very subjective world; it's incredibly complicated, and everything is double-sided. You need to be comfortable with what you're doing, but I really think it's important that you have a much bigger sense of what the world needs and that you are working toward those things the world needs. We're entering a world, hopefully, that's increasingly democratic and distributed, that values diversity and different perspectives, and that creates services in all of these different little niches that benefit people in all kinds of nuanced, magical ways. It's very important that you keep those systems open and that you have a strong point of view about the kind of future you want, because it would be so easy for very powerful lobbies, political or technological if you will, to own all of the data or all of the thinking and have everyone else follow like sheep.
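Faste's description of playing an expert's recorded motion back into a novice's body is, at its core, a control problem: pull the trainee's limb toward the recorded trajectory with corrective forces. Here is a deliberately toy Python sketch of that idea for a one-dimensional point mass; the recorded trajectory, gains, and dynamics are illustrative stand-ins, not an actual robotics API or Faste's system.

    import numpy as np

    # One-dimensional toy model: an "expert" trajectory recorded over two
    # seconds, and a simulated trainee limb (a point mass) pulled toward it
    # by a spring-damper guidance force, the core of haptic playback.
    dt = 0.01
    t = np.arange(0.0, 2.0, dt)
    expert = np.sin(2 * np.pi * t)   # stand-in for a recorded expert motion

    k, d, mass = 50.0, 5.0, 1.0      # illustrative spring/damper gains
    pos, vel = 0.5, 0.0              # trainee starts off-trajectory, at rest

    errors = []
    for target in expert:
        force = k * (target - pos) - d * vel  # pull toward the expert path
        vel += (force / mass) * dt            # integrate point-mass dynamics
        pos += vel * dt
        errors.append(abs(target - pos))

    print("mean tracking error:", round(float(np.mean(errors)), 4))

In a real exoskeleton or haptic suit, a guidance term like this would be computed per joint, with gains tuned so the playback assists the trainee rather than overpowering them.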
The O'Reilly Radar Podcast: Bot hype, bot UX, and bots in the workplace.

This week on the Radar Podcast, we're featuring the first episode of the newly launched O'Reilly Bots Podcast, which you can find on Stitcher, iTunes, SoundCloud, and RSS. O'Reilly's Jon Bruner is joined by Pete Skomoroch, the co-founder and CEO of Skipflag, to talk about bots: what's driving the sudden interest, what we can expect from the technology, and some interesting emerging applications.

Here are some highlights:

The uncanny bot valley

I've seen a lot of hype waves over the years in tech, but this one is growing pretty rapidly. I've heard that exact story from a few people: CIOs from big companies are actually saying, 'All right, what's our bot strategy? I want to retask some people to dig into this.' There have been a lot of things like that in the past, where it could feel misguided because, 'Wait, it's too early—we don't even know what this is yet.' At the same time, there's usually something behind these things. Another recent analogy was Minority Report, right? If you go back to 2002, when that movie came out, the boardrooms were echoing with, 'I want an interface like that! I want to talk to a computer with my hands and wave them around.' Now, maybe a little bit of what we're seeing is like the movie Her, which came out in 2013. ... It feels eerily close to where we are, but there is that uncanny valley between what you see in the movie and where the AI tech is right now. I think that's why it feels a little bit like hype; most people don't grasp the difference.

1,000 bots versus one god bot

Benedict Evans at Andreessen Horowitz has been writing a bunch about the rise of messaging over the last four or five years, and now he's talking a lot more about conversational commerce and UX and bots. I really liked one quote of his: 'What can I ask if I can't ask anything?' This is a different kind of discovery, right? Before, we were talking about discovery of apps; now, discovery of bots or products. There is a deeper problem, which is, when I'm in a conversation with a new bot, if the interface for every bot is kind of the same, some text interface, it's unclear exactly who I'm talking to, what they know and don't know, and what I can ask. If it has some knowledge inside the bot's memory, it's unclear what it knows and what it doesn't know. That's where I think Amazon Alexa—they're walking a line, but I think part of the reason it's clicking with some consumers better than previous attempts at these things is, my understanding is, they spent thousands and thousands of hours with actual voice actors in a room asking it a lot of different questions, and then kind of brute-force trained it to respond well and be resilient to those kinds of requests. Now, that's not a realistic solution for most other bots, and I think part of the solution here is going to be better UX in these messenger platforms, so that you have a clearer sense of the options and the menus if you are texting. Another part is being very clear about what the bot is good for and what it isn't. This is more like 1,000 bots versus one god bot.

Overcoming the brittleness issues of the semantic web

If you go back to the semantic web days, the vision was that you'd have this machine-understandable interface so that machines could talk to machines, and all these queries, like booking a flight, would magically happen.
The vision that everybody really wanted was—Apple had this vision of the Knowledge Navigator. We're actually not that far off from that demo these days, but it's kind of a walled-garden demo: you could build that for that specific case, but to enable almost any generic application, what you really need is a fuzzy way for APIs to talk to APIs, with some reasoning and intelligence. I don't know if this bot wave is going to stick or if your bot strategy is going to really matter at the end of the day, but I'm actually optimistic that machine learning is going to keep cranking away. Text is here to stay; it's a nice way to talk to people in public without everybody talking over each other. What is interesting is that we're now training machines to talk via text. Now, what happens when you have a machine talk to another machine via text? Do we get over some of those brittleness issues that killed things like the semantic web?

Bots at work

I'm pretty bullish on the idea of AI in the workplace. That's why I'm pretty excited about the Slack platform. They were one of the early movers. Once they called the apps you could build on Slack 'bots,' I think that's really where you saw a step function in the number of bots, because by definition, if you're building an app on Slack, it's a bot. Now, Facebook has followed suit, and everything there is a bot as well. I think you're going to see a split between e-commerce applications and the workplace, and I'm sure a lot of the big workplace players will have some form of bot platform or bot interaction.
In this O'Reilly Radar Podcast: The impact of minimal IoT product security and the case for new pro-security business models.

This week's Radar Podcast episode is a special crossover edition from the O'Reilly Security Podcast, which you can find on iTunes, Stitcher, RSS, or SoundCloud. O'Reilly strategic content director Courtney Nash chats with Cory Doctorow, a journalist, activist, and science fiction writer. They talk about nascent pro-security industries, the EFF's lawsuit against the U.S. government, and the new W3C DRM specification.

Here are some highlights:

Auditing IoT products is a liability for security researchers

Think about the conditions under which IoT companies operate. Their business plan—the thing they show to VCs to get the money to go into the business—is to monetize data. They're all designed with security as an afterthought. They're all designed with the minimum viable security to make the product not immediately burst into flames after you put it inside your body or put your body inside of it. Even worse, security researchers face total, brutal liability for investigating these devices and telling people which ones are and aren't safe. It is completely nightmarish.

New pro-security business models

Note: The Electronic Frontier Foundation is representing Andrew "Bunnie" Huang and Matthew Green in a case challenging the constitutionality of Section 1201 of the DMCA.

One of the things our DMCA lawsuit would provide for is a pro-security business model. Imagine if you could start a commercial consultancy that would come in and deworm your IoT household. It could come in and jailbreak all the devices, check their firmware loads, and replace those firmware loads with open firmware, patched firmware, or something else that sits in between. All of that commercial activity is currently off-limits, and it would become available in the same way that you can enable third-party parts and services when there are no legal impediments. The hardware service and support market in the U.S. for all classes of goods, from lawnmowers to cars to air conditioners to computers, is 2% to 4% of America's GDP. It's a gigantic multi-billion-dollar sector, and in many cases, these are small and medium-size enterprises.

Related resources:

The EFF is suing the US government to invalidate the DMCA's DRM provisions (BoingBoing)
America's broken digital copyright law is about to be challenged in court (The Guardian)
The 1201 complaint in full
The O'Reilly Radar Podcast: Natural language understanding and natural language processing applications, our future with chatbots, and open source indexing.

This week, I talk with Alyona Medelyan, co-founder and CEO at Thematic and founder and CEO at Entopix. We talk about natural language understanding, the challenges of analyzing unstructured text, and Maui, the open source indexing tool she's been working on for the past 10 years.

Here are some highlights:

Use cases of natural language understanding

Natural language understanding (NLU) is really a subarea of natural language processing (NLP). In general, NLP deals with using computers to understand human language, but not all NLP tasks require actual understanding. For example, take part-of-speech tagging, where an algorithm decides whether a word is a noun, an adjective, or a verb. For the algorithm to perform this accurately, we don't really need to know what the words mean. You can achieve quite a lot by simply counting how many times part-of-speech tags follow each other; very simple techniques are sufficient. [A toy sketch of this tag-counting idea appears after these highlights.] On the other hand, if we're building a dialogue agent, a chatbot like Siri, for example, then in order to respond meaningfully, Siri would need to understand what each of our statements means, and this is where the understanding comes in.

Practical applications of NLU for enterprise

A lot of what can be done with NLU is very practical. I'm actually in Portugal at the moment, and I don't know any Portuguese. Every time I go to a restaurant, buy groceries, or search for places, I use Google Translate, so it's quite practical. In terms of what everyday businesses, not just giants like Google and Apple, can do with NLU, I think the key example is understanding customer feedback, because these days pretty much everybody has a smartphone. Everybody has written a review for a company, whether they liked its services or not. People send complaints and so on. With all of this text, businesses can become more competitive, but no person can read all of these data by hand. Sentiment analysis—one of the techniques that uses natural language understanding to determine not just whether the customer is happy or sad, but also the specific things they say the business is good at or could improve—can practically help businesses compete and get better at their offerings.

Maui: More than a digital librarian

In a traditional library, a librarian categorizes books so that people can find them. In a digital library, Maui takes this role, identifying what each book or document is about. That is what Maui does; its results can be used to improve search and organize documents, but that's just one of the applications. I have also helped companies apply Maui in many interesting ways. One company used it to link advertisers to web pages to display content-relevant ads. Another used it to send users content recommendations. How it differs from Thematic: Thematic is specifically designed to analyze short pieces of text, something Maui doesn't do well. Maui works great on written documents whose authors actually thought about how to write them, and Thematic works better on short text and can detect more fluid themes than Maui.

Our future with chatbots

I think that chatbots and automated personal assistants, even though they're currently not particularly advanced in what they're doing and require a lot of human help, will still become more prevalent in the future.
That will mean we won't need to interact with people as often. Just as online banking made transactions cheaper, customer support will become cheaper, too, thanks to chatbots. On the other hand, businesses will compete on providing the best deals and the best customer service for their customers. I think they will use more and more natural language understanding to figure out what people say about their business, their competitors, and their products. In the end, we as customers will be the ones who benefit from all of this.
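To illustrate Medelyan's point that simple counting goes a long way in part-of-speech tagging, here is a minimal Python sketch. The tiny hand-tagged corpus and tag set are invented for illustration; real taggers train on large corpora such as the Penn Treebank, and this ignores smoothing and unknown words entirely.

    from collections import Counter, defaultdict

    # A tiny hand-tagged corpus, invented for illustration.
    tagged_sents = [
        [("the", "DET"), ("dog", "NOUN"), ("runs", "VERB")],
        [("a", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")],
        [("the", "DET"), ("runs", "NOUN"), ("end", "VERB")],  # "runs" as noun
    ]

    transitions = Counter()            # how often tag B follows tag A
    word_tags = defaultdict(Counter)   # how often each word carries each tag

    for sent in tagged_sents:
        prev = "<S>"                   # sentence-start pseudo-tag
        for word, tag in sent:
            transitions[(prev, tag)] += 1
            word_tags[word][tag] += 1
            prev = tag

    def likely_tag(word, prev_tag):
        # Score each tag seen for this word by transition count times
        # emission count; unknown words would need separate handling.
        candidates = word_tags[word]
        return max(candidates,
                   key=lambda t: transitions[(prev_tag, t)] * candidates[t])

    print(likely_tag("runs", "DET"))    # NOUN: determiners precede nouns
    print(likely_tag("runs", "NOUN"))   # VERB: nouns precede verbs here

Nothing in this sketch knows what "runs" means; the disambiguation falls out of tag-transition counts alone, which is exactly the contrast Medelyan draws between shallow NLP and genuine understanding.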
The O'Reilly Radar Podcast: Eleanor Saitta on security countermeasures at the human level, the relationship between security and design, and understanding security design as a separate discipline.

This week's episode features a special crossover conversation from the O'Reilly Security Podcast, which you can find on Stitcher, iTunes, SoundCloud, or RSS. O'Reilly's Courtney Nash chats with Eleanor Saitta, a security architect at Etsy. They talk about the importance of thinking of security in a human context and the increasingly critical relationship between security and design.

Here are a few highlights:

Detecting fraudulent patterns at the human level

Look at banking fraud and fraud detection systems. Financial malware is a real issue, and we are seeing more and more people who end up with malware running on their phones that then attacks bank authenticators, or logs into their account and makes transfers. These are starting to be very real issues, let alone credit card numbers and all that kind of stuff. But the biggest way those attacks are stopped isn't by preventing code from running on people's machines; it's by detecting fraudulent patterns and transfers at the human level, and cutting things out at the business-rule level and much higher levels. In the worst case, someone goes into a bank physically, talks to someone, and has a conversation. That's just as much a part of the security countermeasure set as any number of anti-banking-Trojan and anti-malware projects are. [A toy sketch of such business-rule checks appears after these highlights.]

The relationship between security and design

That whole process of coming to understand the high-risk world a little bit more was, in some ways, really challenging for me, because by the time I first started getting involved in that community, I'd spent probably eight or nine years doing big enterprise security. To come into this community and realize that I actually knew very little about how to create better security outcomes for human beings was an interesting thing to learn midway through my career. What it made me do was go back and think a lot about the relationship between security and design. One of the things we need to do when we're building systems (at the time, I was mostly thinking about high-risk people, but I've realized this applies to any system) is understand not just what the user is worried about, but what countermeasures they can use to cancel out their adversaries' attacks, because we're dealing with that design space much more than with the code space. Now, if we can find things at the code level that give us new capabilities in that design space, that's amazing. Being able to get rid of classes of low-level bugs so we can stop thinking about them—great, that's a huge capability for the design space and the architecture space. All of the different things we can do with cryptography, using it to reduce the kinds of attacks people can be subject to and giving them new invariants the system can let them use. Great, amazing capabilities. But the reason they're interesting is how they shift that design space, and that has to be the thing that starts driving everything.

Security design as a separate discipline

There's a conversation between architecture and requirements and design. There has to be.
None of these can act independently, but the thing that we don't see, the thing that I really don't see in the security community yet, is an understanding of security design as really a separate discipline. This is literally what I'm spending my time doing right now.
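Saitta's point that fraud is mostly stopped by pattern detection and business rules, not by blocking code on endpoints, can be made concrete with a toy rule layer. This Python sketch is purely illustrative: the thresholds, field names, and rules are invented, and a real bank's fraud engine would combine many more signals with statistical models.

    from datetime import datetime, timedelta

    def flag_transfer(transfer, history, known_payees):
        # Toy business-rule layer: return the reasons a transfer looks
        # suspicious. All thresholds here are invented for illustration.
        reasons = []
        if transfer["payee"] not in known_payees and transfer["amount"] > 1000:
            reasons.append("large transfer to a never-seen payee")

        recent = [t for t in history
                  if transfer["time"] - t["time"] < timedelta(hours=1)]
        if len(recent) >= 3:
            reasons.append("unusual velocity: 3+ transfers in the past hour")

        largest_past = max((t["amount"] for t in history), default=0.0)
        if transfer["amount"] > 10 * max(largest_past, 1.0):
            reasons.append("amount far above this account's history")
        return reasons

    now = datetime(2016, 11, 1, 12, 0)
    history = [{"time": now - timedelta(days=3), "amount": 80.0,
                "payee": "rent"}]
    suspect = {"time": now, "amount": 5000.0, "payee": "account-never-seen"}
    print(flag_transfer(suspect, history, known_payees={"rent"}))

The point of the sketch is where the check lives: it inspects the transfer itself (amount, payee history, velocity) rather than the device it came from, which is what lets this layer catch malware-driven transfers the endpoint never flags.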
The O'Reilly Radar Podcast: Color Genomics, genetic testing access, and the future of precision medicine.

This week, I chat with Othman Laraki, co-founder of Color Genomics. We talk about challenges and opportunities in genetic testing, the future of precision medicine, and the hurdles medicine and health care currently face (and how we can overcome them).

Here are some highlights:

Genetic testing for everyone

Genetics, we felt, had come to a point where there was an opportunity to have a very big impact by essentially mixing some of the best of the biology world with software—in many ways, genetics had started to become, in part, a software problem. It felt like it was starting to be possible to build products that made genetics accessible to a much broader population by both dropping costs and increasing access, making this information available in a scalable way. ... For example, one of the things we did that we're very proud of is create a program called the Every Woman Program: whenever someone buys a test from Color, they can also contribute to fund testing for someone who can't afford it. We then work with a number of cancer centers, for example at UCSF, the University of Washington, Morehouse in Georgia, and a number of others, each of which works with underprivileged populations and can provide tests for free to people who can't afford them but who the doctors think should get tested.

Opportunities in machine learning

One of the big opportunities for machine learning in genetics is around interpreting the effects of specific genetic changes. Right now, there is a set of guidelines and processes used by the industry for interpreting how a specific mutation impacts a gene. It's a structured process that's very labor-intensive, but it's one of those areas that over time is going to be very heavily solved by machine learning, because there's a lot of data that can be used to train a model instead of running the process purely manually. The industry is going to evolve quite a bit over the next few years, and machine learning is going to have a very substantial impact there. [A toy sketch of such a classifier appears after these highlights.]

Using the full data set of the human body

Each one of us is carrying and generating a tremendous amount of data in our daily lives, whether it's our genome, our microbiome, etc. So far, the link between those data and health practice has run through research and translation to a few proxies: researchers collect a lot of data, do a research study, and turn it into a set of conclusions, and those conclusions over time get turned into a few rules that are introduced into medical practice. If someone's lipid levels are at this level, then you draw these kinds of conclusions. Now, we're coming to the point where the amount of data a doctor will be able to use in a real way to make medical decisions is going to be the full data set of our bodies, which is very exciting and can have a very big impact.

Long-tail distribution of genetic insights

In some ways, I feel we've come to the point where there's been enough data and science behind us that we can already create a lot of value, and that allows the bootstrapping of doing things at a massive scale that really takes us to that long-tail distribution of insights around how genetics work and how the body works.
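Laraki describes variant interpretation as a structured, labor-intensive process that machine learning can learn from labeled examples. Here is a deliberately toy Python sketch of how such a classifier might be framed; the features (conservation, population frequency, predicted protein impact), the labeling rule, and all numbers are synthetic stand-ins, not Color's method or real clinical criteria.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    np.random.seed(0)
    n = 500

    # Synthetic stand-ins for variant features: conservation of the mutated
    # site, population allele frequency, and an in-silico impact score.
    X = np.column_stack([
        np.random.uniform(0.0, 1.0, n),    # conservation score
        np.random.uniform(0.0, 0.05, n),   # population allele frequency
        np.random.uniform(0.0, 1.0, n),    # predicted protein impact
    ])

    # Fake labeling rule, only to make the toy learnable: a conserved site,
    # a rare variant, and high predicted impact => "pathogenic" (1).
    y = ((X[:, 0] > 0.6) & (X[:, 1] < 0.01) & (X[:, 2] > 0.5)).astype(int)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # A conserved, rare, high-impact variant should score high.
    new_variant = [[0.9, 0.001, 0.8]]
    print("P(pathogenic):", clf.predict_proba(new_variant)[0][1])

A real pipeline would train on expert-curated classifications (the industry guidelines Laraki mentions are typically the ACMG criteria) and would have to quantify uncertainty far more carefully than a toy forest does.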
The O'Reilly Radar Podcast: Conversations with Daniele Quercia and Frank Cuypers.

This week's episode features two conversations I've had recently, both centered on smart cities. First, I chat with Daniele Quercia, research team lead at Bell Labs. We talk about the research he's working on now; the launch of goodcitylife.org (including smelly maps and happy maps); why our use of technology shouldn't just aim to make a city smart, but to improve the day-to-day quality of life of its citizens; and the emerging areas of urban informatics he finds most compelling. In the second segment, I chat with Frank Cuypers, associate professor at the University of Antwerp and strategist at Destination Think! We chat about the importance of urban DNA, his nonprofit project Why Your City, and why there's no such thing as a smart city.

Here are some highlights:

Alternative smart city agendas

Daniele Quercia: "The idea and the rhetoric behind the smart city is one of efficiency and security. Usually they say a smart city is a city in which, if you go to work, you're always going to be on time. If you go shopping, there is no queue. You know what? You're going to feel really, really safe because of the CCTV cameras around you. It's about efficiency and security. We all know that we don't choose a city for just efficiency and security. They make a city acceptable, but they don't make a city great. What makes a city great are fuzzy concepts, concepts that are really difficult to quantify, like beauty and happiness, these sorts of things. That's why we built goodcitylife.org, which is a global network of people—researchers, people in industry—who really think there is an alternative agenda to the smart city agenda, and that's what we want to do. We want to empower these people, and we want to do research in this new area of simply giving a good life to people."

Daniele Quercia: "Currently, I'm thinking about something related to algorithmic regulation. We wrote a paper that we presented a few weeks ago at the conference Dub Dub Dub Dub [the WWW conference]. The idea is that platforms like Airbnb or Uber generate data, and these data could be used for regulation. For example, we looked at the evolution of all Airbnb listings in London and at which areas were affected. For those areas, we had census data, so you can see how the evolution of Airbnb is related to different socioeconomic conditions. Then you can see that in certain areas there might be some dodgy subletting going on, and you can regulate that, because you can build an index from the data. You can do the same thing, in general, with any platform, right? You generate data inside a city, and then you take those data to build analytics that might be useful for policy-making, in theory. You can change your policies and then see the impact of those policies in the city, and so on." [A toy sketch of such an index appears after these highlights.]

"Cities are for humankind what the telephone booth is for Clark Kent"

Frank Cuypers: "Of course, I'm a fan of smart cities. The thing is that ... let's talk about data. Seth Godin wrote, 'Data gets us the Kardashians.' People become lazy. There are too many stakeholders sharing data, data, data, and they don't have a clue what they are talking about. I have never met a politician who knows the difference between long data, big data, and short data. What is worrisome is that we're now building sort of vendor lock-in places. For instance, New York.
The council wrote a letter to Google to ask about Google Maps: with your car, it always sends you to the left, and that causes a lot of traffic jams. If 30% of the routes in their maps could go to the right, it would solve the problem. They didn't answer, and I don't know whether they have answered by now, but that's not the point. Is it normal that you have to ask a private company to solve a traffic problem in a public space?

"What we need actually is—Jane Jacobs again—grassroots activism online. I say there is no such thing as a smart city because I really believe that cities are for humankind what the telephone booth is for Clark Kent. He goes in, and in a split second, he transforms into Superman. We know that cities are the places where we can always transform our technology. It's about technological disruption as well, and that's a very good thing. But bring these two big trends together: on the one hand, you have GNR (genetics, nanotechnology, robotics) and information technology. On the other hand, urbanization; 80% will live in cities by 2060. This is the most important thing we have to fix in our lives."

Frank Cuypers: "Now, I'm very interested in people who go to the basic question: what is the economic foundation of our city? Someone I really appreciate, and he's not so well known in the Anglo-Saxon world but he's advising the Vatican and the state of Ecuador, is Michel Bauwens, with his peer-to-peer movement, where you really, really have a sharing economy. Uber and Facebook are in my world, but in my vision, they are not part of a sharing economy. They create a win-win situation: there is something free for you, and there is some money to earn with your data. In the end, there is someone who always pays the bill.

"It's the same with tourism. Sometimes we create a win-win situation, but the environment is destroyed, like the Great Barrier Reef in Australia. We need to create a win-win-win situation, without any party losing anything. That kind of disruption in economics and in politics is, I think, very necessary these days, because we can't keep pace with what happens in technology. We can't keep pace with what happens in information technology."
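Quercia's idea of building a regulatory index by joining platform data with census data can be sketched in a few lines of pandas. Everything below is invented for illustration: the areas, the listing counts, the census columns, and especially the index formula, which simply flags fast Airbnb growth in areas where subletting restrictions are most likely to apply.

    import pandas as pd

    # Invented numbers for three hypothetical areas; a real study would use
    # scraped listing counts and published census tables.
    listings = pd.DataFrame({
        "area": ["A", "B", "C"],
        "listings_2012": [40, 10, 5],
        "listings_2015": [400, 30, 12],
    })
    census = pd.DataFrame({
        "area": ["A", "B", "C"],
        "households": [5000, 8000, 3000],
        "pct_social_housing": [35.0, 10.0, 5.0],
    })

    df = listings.merge(census, on="area")
    df["growth"] = df["listings_2015"] / df["listings_2012"]
    df["per_1k_households"] = 1000 * df["listings_2015"] / df["households"]

    # A crude "regulatory attention" index: fast growth concentrated where
    # subletting restrictions often apply (here, the social housing share).
    df["attention_index"] = df["growth"] * df["pct_social_housing"]
    print(df.sort_values("attention_index", ascending=False)
            [["area", "growth", "per_1k_households", "attention_index"]])

The point is the workflow (join platform data to official statistics, derive per-area indicators, rank), not this particular formula; a real study would validate any such index against ground truth before using it for policy.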