Gerd Gigerenzer on How to Stay Smart in a Smart World


0:37

Intro. [Recording date: July 8, 2022.]

Russ Roberts: Today is July 8th, 2022. My guest is Gerd Gigerenzer. Gerd was last here in December of 2019, talking about his book Gut Feelings. His newest book is our topic for today: How to Stay Smart in a Smart World: Why Human Intelligence Still Beats Algorithms. Gerd, welcome back to EconTalk.

Gerd Gigerenzer: I’m glad to be back and to talk to you again.

Russ Roberts: My pleasure.

1:03

Russ Roberts: You write a lot about artificial intelligence and you say at one point that AI–artificial intelligence–lacks common sense. Explain.

Gerd Gigerenzer: Yeah. Common sense has been underestimated in psychology, in philosophy, everywhere else. It’s a great contribution of AI to make us realize how difficult common sense is to model.

So, what that means is that, for instance, AlphaZero can beat every human in chess and Go, but it doesn’t know that there is a game called chess or Go. A deep neural network, in order to learn to distinguish pictures of, say, school buses from other objects on the street, needs 10,000 pictures of school buses.

If you have a four-year-old and point to a school bus, you may have to point another time, and then the kid has gotten it. It has a concept of a school bus.

So, what I’m saying is: artificial intelligence, as in deep neural networks, is a very different kind of intelligence that does not much resemble human intelligence. The key is to understand that deep neural networks are statistical machines that can do a very powerful search for correlations. That’s not the greatest ability of the human mind. We are strong in causal stories. That’s what we invent; that’s what we look for.

A little child just asks, ‘Why? Why? Why? Why do I have to eat broccoli? Why are the neighbors so much richer than we are?’ It wants causal stories.

Another aspect of human intelligence is intuitive psychology. How can a deep neural network know about these things?

And, finally, there’s intuitive physics. Already, children understand that an object that disappears behind a screen is not gone. How does a neural network know that? It’s very difficult. It’s a big challenge to get common sense into neural networks.

3:50

Russ Roberts: So, a big issue in computer science–we’ve talked about it many times on this program over the years–is that: Is the brain a computer? Is the computer a brain? They both have electricity. They both have on/off switches.

There is a tendency in human thought, which is utterly fascinating and I think underappreciated, that we tend to use whatever is the most advanced technology as our model for how the brain works. It used to be a clock. It was other things in the past. Now, of course, it’s a computer. And, there is a presumption that when a computer learns to recognize the school bus, it’s mimicking the brain. But, as you point out, it’s not mimicking the brain.

Russ Roberts: But, there may be some things that we call artificial intelligence that are brain-like and others that are not. What are your thoughts on the limits of that process? There’s a lot of nirvana, utopian thinking about what computers will be capable of in the coming years. Are you skeptical of those promises?

Gerd Gigerenzer: There’s certainly a lot of marketing hype out there. When IBM [International Business Machines] had its great success with Watson in the game Jeopardy!, I was amazed. Everyone was amazed. But, it’s a game–again, a well-defined structure. And even the rules of Jeopardy! had to be adapted to the capabilities of Watson. Then, the CEO [Chief Executive Officer], Ginni Rometty, announced, ‘Now, it’s the Moonshot.’ We are not going to the moon, but to healthcare–not because Watson knew anything about healthcare, but because that’s where the money was. And then, naive heads of clinics bought the advice of Watson.

Watson Oncology was the first product, for cancer treatment, only for people to find out that some of its recommendations were dangerous, even deadly. And then, IBM clarified that Watson is at the level of a first-year medical student.

Here we have an example of a general principle: If the world is stable, like a game, then algorithms will most likely beat us–perform much better. But, if there’s lots of uncertainty, as in cancer treatment or investment, then you need to be very cautious. The claims are probably overstated–in that case, by the PR [Public Relations] Department of, yeah, of IBM.

Russ Roberts: But, isn’t the hope that, ‘Okay, Watson today is a first-year medical student, but give it enough data, it’ll become a second-year medical student. And in a few years, it’ll be the best doctor in the world.’ And we can all go to it for diagnosis. We’ll just do a body scan, or our smart watch will tell Watson something about our heartbeat, etc. It will be able to do anything better than any doctor. And you won’t have to wait in line, because it can do this instantly.

Gerd Gigerenzer: That’s rhetoric. If you read Harari, or many other prophets of AI, that’s what they preach.

Now, I have studied psychology and statistics, and I know what a statistical machine can do.

A deep neural network is about correlations; it’s a powerful version of a non-linear multiple regression, or a discriminant analysis. Nobody has ever talked about multiple regressions as intelligence. They can do something else. We should not let ourselves be bluffed into the story of super-intelligence.

So, the real prospect is: deep neural networks can do something that we cannot do. And we can do something that they cannot do.
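Gigerenzer’s claim that a deep neural network is, at bottom, a powerful nonlinear regression can be made concrete with a toy sketch (mine, not from the interview): strip away the layers and nonlinearities, and a ‘network’ trained by gradient descent is literally doing least-squares regression–finding weights that fit the correlations in its training data.

```python
# Toy illustration: a one-neuron "network" with no activation function,
# trained by gradient descent on squared error, is just linear regression.
# It finds the correlation in the data; it has no idea what the data means.

def fit_linear(xs, ys, lr=0.01, steps=5000):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]        # generated by y = 2x + 1
w, b = fit_linear(xs, ys)   # recovers roughly w = 2, b = 1
```

Real deep networks add many such units, nonlinear activations, and far more data, but the core operation–adjusting weights to minimize prediction error on past examples–is the same, which is why they excel in stable worlds and struggle when the future differs from the past.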

If we want to invest in better, smarter AI, we should also invest in smarter people. That’s what we really need.

So: smarter doctors, more experts who can tell the difference, and not wasting lots of money on projects like IBM’s oncology program that don’t work. IBM also pitched Watson to bankers for investment. If Watson could invest–if it were the great investor–then IBM wouldn’t be in the financial troubles it is.

8:54

Russ Roberts: What I love about that insight is that it focuses on what distinguishes where artificial intelligence–or at least computers at this stage–can be extremely powerful versus where it cannot. And that’s stability.

There’s a more general principle–I think it’s in your book, but certainly it’s in your other books or in other people’s books–which is: fundamentally, when we’re looking at correlations in big data, we’re presuming that the past will tell us what the future will be like. And sometimes it can: because it’s stable.

The environment is stable enough that whatever were the patterns that were revealed in the past, those patterns will persist in the future.

But in most human environments they don’t. And so, the promise of big data is limited. I like–well, two things.

Former, excuse me, past EconTalk guest, Ed Leamer, likes to say, ‘We are storytelling, pattern-seeking animals.’

And, we are good at patterns and causation–and sometimes we’re correct–but the computer doesn’t have any common sense to examine whether a correlation is just a correlation or a causation.

Gerd Gigerenzer: So, the general point is–I’ve been studying simple heuristics that make us smart. And, simple heuristics–it’s like, you probably know the story of Harry Markowitz, who got his Nobel Prize for an optimization model that tells you how to diversify your money into N assets.

But, when he himself invested his own money for the time after retirement, did he use his Nobel Prize-winning optimization method? No, he didn’t. He used a simple heuristic.

A heuristic is a rule of thumb, and this one is called: invest equally. It’s called one over N. N is the number of assets or options. If you have two, 50/50. If you have three, a third/a third/a third. That’s a heuristic.

And, in a world of calculable risk–as it’s called in decision theory–that is, a stable world, yeah?–that would be stupid.

But, in the real world of finance, studies have shown it often outperforms Markowitz optimization, including modern Bayesian versions of it.

The general lesson is: There’s a difference between stable worlds and uncertainty, unstable worlds.

And, particularly, if the future is not like the past, then Big Data doesn’t help you. And in finance, with Markowitz optimization, you need lots of data to estimate all these parameters. The heuristic, 1 over N, needs no data on the past. It’s the opposite of Big Data.
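The 1/N heuristic Gigerenzer describes is simple enough to sketch in a few lines (the asset names below are hypothetical placeholders, not from the interview): it needs only the list of assets, no historical returns at all.

```python
# A minimal sketch of the 1/N investment heuristic: allocate wealth
# equally across the N available assets. Unlike Markowitz optimization,
# it requires no past data and estimates no parameters.

def one_over_n(assets):
    """Return an equal-weight allocation over the given assets."""
    n = len(assets)
    if n == 0:
        raise ValueError("need at least one asset")
    weight = 1.0 / n
    return {asset: weight for asset in assets}

# Hypothetical example: three asset classes, a third each.
portfolio = one_over_n(["stocks", "bonds", "real_estate"])
```

The contrast with Markowitz is the point: mean-variance optimization must estimate expected returns and a full covariance matrix from past data, and in an unstable world those estimates are noisy enough that the zero-parameter heuristic often wins.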

Russ Roberts: Well, except for the problem, you’ve got to figure out N. N doesn’t come–the number of assets is not given.

Gerd Gigerenzer: That’s true.

Russ Roberts: That’s another problem.

Gerd Gigerenzer: Yeah. But, that’s the same thing for Markowitz optimization.

Russ Roberts: Yeah. For sure. For sure.

12:16

Russ Roberts: Now, you are a strong and, I think, eloquent promoter of human abilities and a counterweight to the view that we’re going to be dominated by machines–that they’re going to take over, because they’ll be able to do everything. So, we’re kind of remarkable: our brains are really amazing. And yet at the same time, there’s a paradox in your book, which is that you’re very worried about the ability of tech companies to use Big Data to manipulate us. How do you resolve that paradox?

Gerd Gigerenzer: So, the statement that you made before is just right on the point. So, it’s not about AI by itself. It’s about the people behind AI and their motives.

We usually talk about whether AI will be omniscient, or AI will be just an assistant tool; but we need to talk about those behind it. What’s the motive? So, that is what really worries me.

So, it’s certainly the case that we are in a situation today where a few tech companies–mostly run by a few relatively young white males who are immensely rich–shape the emotions, the values, and also control the time of almost everyone else. And, that worries me.

You are a free market person. And I, also, am a person who tries to believe in people’s abilities. But we need to be aware that the opposite won’t happen.

So, here is one thing that we might think about–how to improve the situation: Google gets 80% of its revenue from advertisement. Facebook, 97%. And, that makes the user no longer the customer.

So, in the book How to Stay Smart, I use an analogy: the free coffee house. Imagine in your hometown there is a coffee house that offers free coffee. Soon, all the other coffee houses will be bankrupt.

We all go there and enjoy our time, but the tables are bugged, and on the walls are video cameras which record everything we say, to whom we talk, and when we do it; and all of that will be analyzed. And, in the coffee house, there are salespeople who interrupt us all the time in order to make us buy personalized products.

The sales people are the customers of this coffee house. We, who enjoy our coffee–we are the product being sold: precisely, our time, our attention.

So that’s roughly how the business model of Facebook and others functions.

And it also gives us an idea about a solution. Namely: Why don’t we get real coffee houses back, where we can pay rather than being the product?

16:14

Russ Roberts: The problem with that–and by the way, you know, I started off–I’m sure long-time listeners can go back to my earliest episodes on this topic, where I was extremely skeptical and less worried, not worried at all; to a little bit worried; to now, today, I’m somewhat worried.

And, listeners will recognize my metaphor of the repair person who comes to your house to fix your washing machine. Does it for free. But, while he is there, he takes a lot of photographs of what you bought and what’s on your shelves and says, ‘Oh, by the way, you don’t mind if I use these to sell to my friends? Because they want to know what you buy and what you’re interested in. What books are on your shelf and the receipt that you have here for this product you bought.’

And, there is something creepy about it. The creepy part for me is that most people don’t think about it. They don’t realize that there are cameras in the coffee house. They don’t realize that everything they say is being recorded–and who they’re talking to, and what the topics are, and so on.

On the other hand, you could argue–and sometimes I argue like this, because it’s interesting and it may be true–‘Okay. So those salespeople interrupt my conversation every once in a while. They don’t literally shut me up. They just hold an ad next to my friend’s head and distract me from the conversation you and I are having in the coffee house.

And I find that somewhat annoying. But actually, it’s kind of useful, because sometimes it’s something I actually want, because they know a lot about me. And, the coffee is free.

So, you’re telling me I need to go to this coffee house over here where I don’t get interrupted. Okay. That’s nice. But, the coffee is $5 a cup.’ What’s scary about that, to you?

Russ Roberts: I think there is stuff to be scared about. I’m being a bit rhetorical here. But I’m increasingly scared, so take a shot.

Gerd Gigerenzer: So, there are two kinds of personal information that need to be distinguished. The one is, like, collecting information about what books you buy and recommending you other books.

The other is to collect all the information about you that one can–including whether you’re depressed today, whether you are pregnant, whether you have had a heart failure or have cancer–and use that information to target you in the right moment with the right advertisement.

So, that’s the part that we do not need.

And also, in some countries–like, I’m living in Berlin. And, East Germany had the Stasi.

Russ Roberts: The secret police.

Gerd Gigerenzer: If the Stasi had had these methods, they would have been overjoyed.

So, we see something similar in China and other countries.

And, the final point I want to make: what people underestimate is how closely tech companies are intertwined with governments. So, they say, ‘Oh, it doesn’t matter that Zuckerberg knows what I’m doing, as long as the government doesn’t know.’ Uh-uh. Snowden showed, a few years ago, how close the connection is in the United States. In the United Kingdom, there’s Karma Police. And so on, in many countries.

So, then, the–let me make another point. What would it cost us to get freedom and privacy back? So I made a little calculation.

If you take Facebook–now the Meta Corporation–and you were to reimburse Zuckerberg for his entire revenue, it would come to about $2 per month. Per person. That’s all.

And for those in countries that cannot afford the $2, the rest of us pay $4.

And that would solve the problem. So there is a solution for that, if you want to go there. The question is how we get there.
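Gigerenzer’s $2 figure is a back-of-the-envelope calculation: total revenue divided by users and months. The revenue and user numbers below are my own illustrative assumptions (roughly Facebook’s 2018 annual revenue and monthly active users, around the time of his first survey), not figures stated in the interview.

```python
# Back-of-the-envelope version of the calculation Gigerenzer sketches:
# what each user would pay per month to fully replace ad revenue.
# Both inputs are assumed round numbers for illustration.

annual_revenue_usd = 56e9   # ~$56 billion/year in ad revenue (assumed)
monthly_users = 2.3e9       # ~2.3 billion monthly users (assumed)

cost_per_user_per_month = annual_revenue_usd / monthly_users / 12
print(round(cost_per_user_per_month, 2))  # → 2.03
```

With these assumed inputs the result lands right around the $2 per person per month he cites; doubling it to $4 is how he covers users who cannot pay.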

20:44

Russ Roberts: The reason I’m a little bit skeptical of that is that I pay $5 a month for an app that is helping me with my Hebrew. I pay $5 a month for a lot of things, by the way. There are a lot of Substack and Patreon accounts I pay $5 a month to. So, I have a lot of those–$60 a year each. Somebody has decided that $4.99 is pretty easy for people to swallow. And, by the way, when it says ‘$60 a year,’ sometimes I go, ‘Oh, that’s a lot,’ but ‘$4.99 a month’–that’s nothing.

Russ Roberts: Anyway, I have a bunch of those. And, you’re suggesting that Zuckerberg could have twice the money, twice the revenue he has now, if he would charge people $4 a month, instead of $2 a month.

Now, economics predicts that, in general, when you make people pay $4 for something that they used to get for free, you won’t have 2 billion users. You’re going to have fewer. But, as long as you still have a billion, you’re suggesting that going to $4 a month would have such an enormous effect on their user base that Zuckerberg won’t do it voluntarily: we’d have to impose something through regulation.

Gerd Gigerenzer: Yeah. That’s the problem, yeah, how to get there. And I see the problem. But I’m just saying that it wouldn’t be–in terms of the contribution of individuals to get their privacy back–it wouldn’t be much. It’s just a coffee.

Russ Roberts: But, most people don’t care.

Gerd Gigerenzer: Yeah. That’s the problem.

Russ Roberts: Would you argue–are you arguing that they should care? I think you’re arguing they should care, because they don’t realize–I think a lot of people don’t realize what they’re actually being surveilled about, how widespread it is. But, you’re also arguing that even if they knew–and I think many people do know, we’ll talk about that maybe in a minute–they go, ‘Eh, what’s the big deal? I get a lot of products that I’m interested in. It’s actually pretty good.’

They don’t realize, potentially, that the products they see in their search engine aren’t really the ones that they want. They’re the ones that the real customers–the advertisers–line up to put in front of them. And we recently had a conversation about this with the head of Neeva, who used to work on Google Ads.

So, let me try a different way to get at a better situation, see what you think. This is the way an economist might think about it. So, that coffee shop, the real problem is there’s only one free coffee shop and everybody is in that coffee shop.

So, when I’m on Twitter, I’ve got my followers. I’ve accumulated them over years. If I go to the new coffee shop that competes with it, it’s empty. It’s very hard for a new coffee shop to start up. One way to think about how to improve the situation is: what we’d like is a lot of coffee shops.

And in one of the coffee shops, the coffee is free, so are the pastries, and it’s fantastic quality. The problem is that they force you to give blood when you come in. They do an MRI [Magnetic Resonance Imaging]. They know everything–like you say, they know all your mental states. It’s very invasive.

But, there’s another coffee shop down the street where it’s not free. It might be a subscription model: once you are in the coffee shop, you can have as much coffee as you like. There’s a third coffee shop where you pay by the cup, because some people don’t drink so much that the full subscription is worth it.

The problem is I can’t find my friends in those coffee shops.

What I would suggest, for those people–you, and maybe me–who are worried about surveillance and government intervention against us, tyranny: let’s find a way that I can port my friends to a different coffee shop without having to start from scratch.

Possible?

Gerd Gigerenzer: I mean, humans have imagination and we could find a way to get there. It’s just, we also need people who want that.

And, as you hinted before, there’s the so-called Privacy Paradox, which is that, in many countries, people say that their greatest concern about digital life is that they don’t know where their data is going and what’s done with it.

If that’s the greatest concern, then you would expect that they would be willing to pay something. That’s the economic view. You pay something for that.

But, then–Germany is a good case. Because in Germany, we had the East German Stasi. We had another history before that–the Nazis, who would have been delighted with such a surveillance system.

And, so Germans would be good candidates for a people who are worried about their privacy and would be willing to pay.

That’s what I thought.

So, I have now done three surveys since 2018, the last one this year–a representative sample of all Germans over 18. And I asked them the question: ‘How much would you be willing to pay for all social media if you could keep your data?’

So, we are talking about the data about whether you are depressed, whether you’re pregnant, and all those things that they really don’t need.

And, so: ‘How much are you willing to pay to get your privacy back?’

Seventy-five percent–75%–of Germans said ‘Nothing.’ Not a single Euro.

And, the others were willing to pay something, yeah.

So, if you have that situation–where people say, ‘Oh, my greatest worry is about my data’; at the same time, ‘No, I’m not paying anything for that,’ then that’s called the Privacy Paradox. [More to come, 26:52]


