In this episode, host Hilary Sutcliffe explores regulation from another angle. Most regulation and criminal justice rest on the idea that fear of consequences, such as fines, sanctions, and prison, is the best way to deter companies and individuals from behaving badly in the future. Her guest this week, Chris Hodges OBE, emeritus professor of justice systems at the University of Oxford, lawyer, and former regulator, explores the extensive research showing that this is in fact not true, and that it often has the opposite effect, increasing the risk of further wrongdoing.
He explains that our best understanding of human nature shows that learning matters more than punishment, and that an approach called "outcome-based cooperative regulation" holds great promise as a more effective way of achieving the goals of regulation, with better results for individuals, businesses, and society.
Hodges is the author of Outcome-Based Cooperation: In Communities, Business, Regulation, and Dispute Resolution.
HILARY SUTCLIFFE: Hello and welcome to From Another Angle, a Carnegie Council podcast. I am Hilary Sutcliffe, and I am on the Board of Carnegie Council's Artificial Intelligence & Equality Initiative. In this series I get to talk to some of today's most innovative thinkers, who take familiar concepts like democracy, human nature, regulation, or even the way we think about ourselves, and show them to us from a quite different angle. What really excites me about these conversations is the way they challenge our fundamental assumptions. Their fresh thinking makes me—and I hope you too—see the world in a new way and opens up a whole raft of possibilities and ways of looking at the future.
Today I am delighted to welcome Chris Hodges, who is emeritus professor of justice systems at the University of Oxford and an expert in regulatory systems and dispute resolution. He has written many books on courts, ombudsmen, and regulatory systems, and has done extensive research into what makes good organizations and good regulations. He is the author of a new book: Outcome-Based Cooperation: In Communities, Business, Regulation, and Dispute Resolution.
We are going to talk about regulation from another angle, and Chris is the perfect person to explore this with because in 2021 he was given the Order of the British Empire by the UK government for services to business and the law, including his work on regulation.
Chris, welcome, and thank you very much for being here with us.
CHRIS HODGES: It is a great pleasure. Thank you, Hilary.
HILARY SUTCLIFFE: When I first learned about your work I was researching trust and regulation, and I can honestly say that it was a real shocker to me to understand that this idea of instilling fear of consequences through punishment and sanctions, things like fines and even jail, doesn't actually work in deterring misbehavior. For me this is pretty much the basis of all regulation and criminal justice, so this is very intriguing. Please could you just tell us a little more about that?
CHRIS HODGES: Thank you, yes. In many ways, I had exactly the same response as you did because I was taught as a law student—as law students and classical economists are still taught—that enforcement of sanctions for breaches deters future noncompliance. I spent the first 25 years of my career as a practicing lawyer advising companies and dealing with prosecutions against them but working increasingly closely with governments and regulators.
When I became an academic I started to think about the theories underlying the various dispute resolution systems in private enforcement and regulatory systems in public enforcement. Throughout every stream of classical philosophy, legal rules, and economic theory you find the word "deterrence" cropping up and repeated again and again.
Because I am basically a legal scientist, I initially thought: Frankly, I do not believe that the imposition of fines, for example, or putting someone in prison actually induces behavioral change in individuals, let alone in large, complex organizations.
Isn't it the change bit that is more important than the sanction, and what is the link between different sanctions and the change? I had seen time and again that there was not a link. It is actually the same point as if you are a good manager: If you shout at employees, what is going to change? Well, they are probably going to be worse rather than better. Simple psychology.
Therefore I looked around for as much empirical evidence as I could, asking: "What is the evidence about how people work, and what is the evidence about whether deterrence works or not?" You can find a great deal of evidence. In fact, in the past five years a number of very similar books have been written by some of my academic colleagues in the United States, the Netherlands, and Israel. We have all been looking at the same science, and actually there is a lot of science.
There is a lot of science in different streams as well. Some is from criminology and some is from business management. There are a lot of business school studies about why good managers are effective, why bad managers are not, and why corporate disasters of any kind happen. It could be the space shuttle blowing up, Enron, the Global Financial Crisis, various transport disasters, or hospitals, the National Health Service (NHS), and so on. What happens? Quite often the answer is culture; all the risk factors are known in advance, but people didn't think about them.
If you are looking at the science of human behavior, one should quite frankly put legal philosophy and economic theory to one side, because they are all theories, and look at what the science tells you. That is the science of humans: in other words, basically behavioral science, psychology, and sociology.
All this stacks up. All these different streams point to the same fact, which is what Yuval Feldman's book The Law of Good People is about: a lot of good people do wrong things, frequently without intending to, or while believing that what they are doing is okay. We have the ability to persuade ourselves that what we are doing is fine.
There could be a number of reasons for this. Technically it is called "cognitive dissonance," but we could be just focusing on "I've got to reach these targets." If targets are imposed on you, and if, for example, you are being paid depending on reaching certain targets, you are going to be focused on that. The incentives that a lot of economic theory places on remuneration, bonuses, targets, and things like that actually frequently produce bad outcomes rather than good ones, because people lose the broader focus of saying, "Oh, wait a minute. I ought to be thinking about something else. I ought to be thinking about whether I am doing the right thing," or about other targets. That is known as "crowding out" in the jargon, because the good ideas get crowded out by just focusing on one thing.
There are loads of behavioral studies on this from all over the place and different situations. There is a famous one from an Israeli kindergarten about imposing fines on parents who collected their children late because the teachers wanted to go home. The end result was the parents thought: "Right. This is just commoditized so I'll turn up late and pay the money. I'll turn up even later, because it's not much money." Then they tried to remove the system and remove the fines, but actually the wrong behavior was instilled so things just got worse.
HILARY SUTCLIFFE: I think we see that too. There is the theory, as often happens in academia, and then there is the real world, and the most litigious societies with the fullest prisons are a clear demonstration that this is not actually working.
CHRIS HODGES: I am afraid to say that in a number of democratic societies there is frankly a strong link with philosophical and political ideologies of authoritarianism and punishment. The United States and other places have a lot of people in jails, and the recidivism rates for quite a number of things, especially for young men, are terrible. Other places invest enough money and time in reeducation, in social factors such as good schooling and a good environment, in social support and finding good things for people to do rather than wasting their time doing things that are antisocial, or in approaches like restorative justice, sitting down and meeting the victims. All that sort of thing that comes out of the criminology stream is mirrored in many ways in "How do you run a good business?" or "How do you run a good organization?," whether it is public, private, charitable, or whatever. You support people.
The behavioral science actually contains quite a number of different theories, but one that I found particularly illuminating is "self-determination theory," developed by the American professors Ryan and Deci, who have spent their whole careers researching many different things with their teams. The textbook is absolutely enormous.
Their three concepts are: People need to feel competent, to feel autonomous, and to feel related to others, because we are social animals. So if you say to someone, "You're bad, you're hopeless, you're useless, you don't know anything" and then you isolate them, that diminishes their feelings of autonomy, competence, and relatedness. Whereas if you say, "Would you like some more training? Next time you do it, how about this?" you encourage them, basically.
The same principle of human beings relating together to produce effective outcomes applies inside organizations just as it ought to in the external attempt to control organizations, behavior, and consequences, which is the external regulatory environment. It's actually the same attempt to control.
It is entirely understandable in the evolution of human societies. There are many interesting books by developmental anthropologists and brain scientists about when we started to cooperate, and human beings are the most effective cooperators, based on our ability to know the difference between right and wrong. Some very distinguished scientists have said that this almost unique ability is what made Homo sapiens different, and that a genetic mutation right at the start of Homo sapiens made us work out who we could cooperate with, and therefore who we trusted and who we did not. This is the glue that cements together families, later tribes, larger communities, and now very sophisticated groupings of nations or international groupings.
But if you think about the control—and this is where one goes into politics, in other words, the study of power—and think back to the Middle Ages, you've got a king, a boss man, who tells people what to do. It's authoritarian, it's vertical. He rules by fear basically, and if people do things he doesn't want, he beats them up, throws them into jail, kills them, etc.
Maybe this sounds a bit dramatic, but there are too many societies these days that are still authoritarian, so we are still using the same punishment in order to control people. I absolutely believe that if society is threatened, society has to defend itself and use hard tools in order to deliver protection. So if you have criminals, terrorists, Internet scammers, child abusers, etc., in the same way you find dictators who start wars against other countries at the moment, you have to respond, defend yourselves, and protect yourselves in those circumstances.
But the science says if you use the same hard authoritarian tools on the people who try to do good things, then that usually goes wrong. It is ineffective and people resent it, and therefore their sense of autonomy goes down so they don't want to work with you. There are so many examples in organizations and in regulation where an authoritarian approach actually has not produced the right answer and quite often produces something that is worse. It is all explicable in terms of the science.
One of the key points for, if you like, enforcement policy is to differentiate between good people and bad people, in very simplistic terms, and to use different tools and different responses on them. A few years ago, when I started publishing on this, I wrote a very large 2015 book called Law and Corporate Behaviour, and the first third of the book recorded a lot of science, saying: "This is really surprising. Deterrence doesn't actually work in most circumstances."
But the rest of the book looked at the enforcement policies of many UK regulators, saying: "Quite a few of them have changed. They are using tools and interventions in other, more supportive ways," in other words, ways that support people's sense of autonomy, and those are usually more effective.
So you see a change happening, and, going back to the political point, I think we are at an absolutely pivotal moment in terms of changing how democracy works, how markets work, how people work, and how society works. We have to be more cooperative, because the point that suddenly struck me in the middle of the pandemic was that human beings are more effective and we produce better outcomes when we cooperate. How do we do that?
It was that line of thinking that developed some earlier work that I had done with my friend Ruth Steinholtz, who is a corporate culture expert and former general counsel of a multinational firm, where we were saying, "You've got to do good things, basically, and be ethical."
But the answer to the question of how we cooperate seems to me to be fairly simple. It comes down to a couple of propositions. One is, we have to know what we are cooperating about. Different people have different objectives, different interests, and different motivations, but we can sit down and talk about it and say: businesses need to be successful, they need money, they need to employ people and pay them fair wages, but they also shouldn't be destroying the planet, and they should support good practice in human relations and employment relations, looking at corporate social responsibility; environmental, social, and governance; net zero; anti-slavery; and so on. Equally, people need appropriate protection, which is the classic function of regulation, so that businesses are not wasting their money harming people, because that is a fairly short-term and risky enterprise. You can balance all these things.
Therefore I saw quite a few models where people had got together and worked out what it was they wanted to achieve. At that point they can all say, however modest their contribution was to the big plan, "I can do this bit, that's my bit, and I can see how it fits into a much bigger plan," at which point their senses of autonomy and motivation go up enormously.
HILARY SUTCLIFFE: I think I have seen that too in a couple of areas of regulation that I have been involved in. It used to be that everybody sat in their corner in confrontation, so a conversation to develop some sort of law was a really confrontational one, setting the standard of "that's the way it should be and that's the way it should work." Everyone is trying to get their little inch, and you perhaps triumph by getting one little word changed, but it does not result in a smooth process or one that everybody is happy about. Everybody is unhappy about it. The people we talk to on this podcast are addressing all these areas that you are helping us understand.
Let's say you are a regulator and you have a new area of technology, for example, because obviously at Carnegie Council we are doing a lot with very new technologies. You can see that you cannot actually use the old command-and-control approach. What should regulators do? What should they be thinking of, and what should they be doing differently?
CHRIS HODGES: The model says we should sit down and work out what it is we are trying to do—purposes, objectives, outcomes. Many regulators do not think about outcomes. They are not expected to think about outcomes.
I always remember the very simple quote from an American chief of police: "I can easily get my staff to go out and write parking tickets. Does that make the streets safe?" No. What makes the streets safe? The same question applies to any safety regulator, environmental protection agency, or even economic regulator. What are they trying to do, what are the outcomes, and what are the outcomes for the various stakeholders who are involved? So you sit down and work it out.
But the regulators also need permission to do this, because as soon as they start talking to anyone, people start shouting, "Capture!" whereas you have to have a conversation. There needs to be governmental and societal permission for them to have discussions about what the outcomes are, but if these discussions are transparent, with appropriate governance and openness, then it becomes a form of stakeholder involvement within which everyone agrees on what we are trying to do.
How do we know if we are doing it? What are the mechanisms? Also, are people trusted? The second point was that people will cooperate where they trust each other, so how do you build that? The one point that I have contributed here is: if trust is a central element, let's turn it around and encourage people to produce evidence that they are trustworthy, that they have good systems.
HILARY SUTCLIFFE: That is a really big difference, isn't it? Instead of having to jump through a hoop, you are asking for demonstration. What I like about that as well is that the stakeholders are agreeing on what their expectation is, so it is not necessary for the nongovernmental organization (NGO) to say, "Well, we don't want to do it like that." They have agency in agreeing what trustworthiness looks like. That is a whole different ballgame.
CHRIS HODGES: Let me just make one point. If you are a sensible business or investor, you don't want to waste your money and your time on stuff that is risky, is going to produce unnecessary harm, and will actually destroy the product, service, or business. You usually want to attract and retain staff, customers, and investors, and to build your reputation in the marketplace and support in society, so that local people appreciate what you are doing and want to work for you.
Therefore, one could have a very efficient approach of actually producing evidence that you have a trustworthy organization because you are demonstrating good outcomes and you are demonstrating good behavior. That could be very efficient because you could pre-qualify for public procurement; you would not have to keep doing it again and again, for example.
But the same point about trust applies just as much to regulators and governments (politicians always find this difficult) as it does to NGOs or anyone else. Can you really be trusted? At that point you actually encourage enormously more engagement, but you should also almost generate a market in thinking about: Well, what other evidence could we produce? What is relevant?
It is quite important that it does not then become authoritarian or blame-oriented, because we know that when people are afraid of criticism or adverse consequences, they do not share information. There are very good books by Amy Edmondson on psychological safety and tons of examples of this.
Aviation safety and nuclear safety have shown how to switch the entire approach from a deterrence model to what they call an "open and just culture," where people feel supported and will share information automatically. That is the only reason planes stay in the sky and nuclear plants don't blow up: they are constantly monitored, with people thinking, "What should we be doing?" It shows that this social system can and does work, and therefore it can be replicated in other situations.
HILARY SUTCLIFFE: We also see what happens when it does not work, as with Chernobyl and Boeing. That is what was missing in both of those cases. Disasters prove the rule.
CHRIS HODGES: Yes, yes, and there are plenty of NHS reports about hospital trusts where things have gone wrong. It is always the same underlying issues.
Therefore, what should people do? First of all, one needs permission. One of my dreams is to get outcome-based cooperative regulation approved by governments. I gave a talk on this at the Organisation for Economic Co-operation and Development (OECD) quite recently, and they are certainly interested and are supporting quite a number of regulators and sectors (food, information, data, medical technology) in moving in this direction. We need a bit more institutional change to give permission and for people to sit down and start talking.
I would not say it is always easy. There is a very good example from a good friend of mine, Srikanth Mangalam, who worked in Canada on a really innovative program of the Ontario government to put in place new rules for heating and heat engineers. The engineers had been moaning for years about the fact that the system did not work, and the Ontario government actually said: "Okay, you sit down, form a committee. We will act as secretariat, but we won't chair it or run it, and you work out what the answer is."
Of course everyone came in with their preconceived views, which were "I am going to defend my particular position, my business, or my approach," so it took time to build trust and engagement, but everyone then saw it as an opportunity, and indeed a responsibility, to get a system that worked; not only that, but one that worked for the public good and that would deliver efficient outcomes but also the right outcomes. It has been, I believe, extremely successful.
These things can take time. Various psychologists say, "Actually, when you are sitting down and talking you may get more tension and disagreements, but it is how you manage it," and you must manage it with respect, or maybe you need an independent mediator or something. No one is saying this is easy, especially with difficult things like artificial intelligence (AI) or neuroscience or whatever. These are new. We do not know what the risks are. We do not know how to control them. We do need to talk about it. There will be differences of opinion, and it is going to take time, but if you do not sit down and talk about it, it is going to take even longer, and when things go wrong you will not identify and fix them as quickly. So it is that sort of approach: a constant circle of identifying problems and fixing them together.
HILARY SUTCLIFFE: I think that is one of the findings of the work that I did: regulators are actually trusted. They have public trust and can provide what we called a "trusted environment" for collaboration. You need a trusted environment, and you need people to join together in the collaboration, especially in something as complex as some of these new technologies, and to feel that they are going to be listened to, that it is going to work, and that it has a public-good purpose rather than just smoothing the path of some innovation for the ends of companies.
It was quite exciting when I was reading your work just to see the different examples of where in even quite high-conflict areas there was a collaborative approach. I think the secret, as you have told us, is having a joint outcome, having an outcome that everybody can develop together but also something that everyone can buy into.
I know you have been doing some more work in new areas like medical devices, medicinal cannabis, and all sorts of interesting fields where you have achieved that. Give us some tips, some dos and don'ts, for running this development process in collaboration.
CHRIS HODGES: I would be honest and say that we are at an early stage of doing this. There is no absolute blueprint. Every situation is different, will need different solutions, and will take time. There are some sectors where you are able to make progress more quickly than in others. There are some that are global and that involve a lot of money and incentives, where there are obvious inherent barriers, so financial services is not where you start. New things like global AI, global platforms, and all the rest of it present obvious challenges, but I do see just at the moment a few moves toward more engaged discussion around where we put the conversations, who needs to be in them, and whether they should be more open, involve more stakeholders, and be more international.
The ones that have worked quite often start off with just two personalities who "get it," as it were: a regulator and a business who engage with each other and trust each other. A good example of that is the pricing review of water in Scotland between the Water Industry Commission for Scotland and Scottish Water. In that case there were only two bodies. They are both publicly owned, which helps. The two CEOs knew each other and trusted each other. They were able to sit down and ask, "What are we trying to do?" which led to bigger involvement, and they created a consumer panel, which is leading to more discussions with local communities in Scotland. So size is an element of this. If you go to a smaller, more cohesive society where people know each other more, like the Nordic countries, then it is much easier to do. The Netherlands is similar, with a bolder approach and a cohesive society.
Size is certainly relevant, but the Scottish Environment Protection Agency (SEPA) has done a spectacularly good job and is now starting another project, much more challenging and bigger, on getting different businesses, different communities, educators, and transport together, saying: "What's the local plan? Let's all agree to it." So it really was a community effort in that situation.
That particular one had effectively a coordinator, who was the local environment inspector, whose obvious interest was that she wanted to reduce environmental pollution from the old oil and gas industries as they were decommissioned. So she has achieved her goal, but everyone else has achieved loads of other goals in the process.
Exactly what that form of discussion and cooperation might look like differs. In other contexts, like social housing or property, I am certainly thinking about creating some form of stakeholder council at a high level, which could be statutory or less formal. I also think that if one has a regulator there, it can be a system regulator, making sure the whole system is working and everyone is doing the right stuff. It therefore does not regulate in quite the same way as might have happened 20 or 30 years ago, because it is just making sure that everyone is doing the right thing and doing their bit, and everyone is then cooperating.
Ombudsmen are a key part of this because in many markets these days—in the United Kingdom and Belgium in particular and less so in many other countries—we have consumer ombudsmen in financial services, energy, communications. Consumers can go there and get independent good advice. They will get their problems sorted because the ombudsmen can then decide cases in a very non-legalistic way, a fast process but totally independent and fair. You do not need to use courts or lawyers in these circumstances, and typically the ombudsmen are free.
The ombudsmen then use all the data they get. Some of it will just be the questions people are asking on the website, some of it will obviously be dispute data, and all of that is aggregated, which tells you what is going on in the marketplace, so it is a huge monitoring tool. They make this available publicly, saying, "We have got these problems" or "That company has got a problem," or they make it available to the regulator, and they talk to the companies, at which point they say, "How are you going to reduce your numbers next quarter?" So it is a self-reinforcing system, especially with a code of good practice in the middle of it. That is terribly effective, but it is also engaging. I wrote a chapter about this some years ago, basically saying "consumers as regulators," because the data consumers put in is critical to this.
HILARY SUTCLIFFE: I think that is what is exciting about it from our point of view too, looking at how society shapes regulation, because we hear a lot about ethics-washing, green-washing, and trust-washing, and getting the different stakeholders together to state the expectations and agree on what to actually look out for really takes that away. Rather than only talking about it, you are delivering the evidence of trustworthiness that you and the stakeholders have agreed is important, and that takes all the "-washing" away, because we have all agreed what the expectations are.
I think at the moment, including in some of the new areas, we are not clear about our expectations. There are ethical codes and ethical discussions, but they are almost a distraction from the behaviors we are actually looking for and seeking to have organizations adopt.
I don't know if you want to go through them particularly one by one, but you have the six elements of outcome-based cooperative regulation. Do you want to just sort of summarize it for us briefly? Is that fair? Is that reasonable?
CHRIS HODGES: The basic approach is: Sit down with the various stakeholders and work out the common purpose that you agree on, namely purposes, objectives, and outcomes. How do you identify whether you are achieving the right outcomes or the wrong ones? Are people getting hurt?
What is the evidence of trust here? One therefore creates relationships that are based on trust. It is possible then to have, if you like, a trust "track" or a trust community. There are some risks in this, because one does not want to exclude people like small businesses, who do not have as much confidence as a large business and may not have the time or the resources to produce much evidence.
So it has to be proportionate. It has to be fair, but it could also build capacities, a good example there being the Essential Services Commission of South Australia, which has been doing this in implementing water and energy regulation on small groups of homesteads or businesses in the Outback in Australia. No way would you impose all the normal regulatory controls that you would on an energy generator or transmission system, but they are able to build up competences.
The code approach basically boils down to: You commit to doing the right thing; if anything goes wrong, you ring the regulator straightaway, at which point people will come along and help you solve the problem.
It is almost as simple as that basically, and obviously you can expand on these things and give more examples—guidance, etc., is very useful. I am not saying there should not be a system with rules because rules are important and sometimes hard legal rules are important, but it depends on the circumstances.
You get people together and then you monitor, and you get a system, asking: Where are we going to get the data from? Is it an ombudsman system? If the data is all over the place, maybe we should pull it together, so you need a single ombudsman, for example, or transparency about what data sits in the companies and what data is scattered elsewhere, because the data is going to be much more effective if you build a larger database.
HILARY SUTCLIFFE: This is like citizen science. Citizen contributions to this are really growing, because citizens are the ones who are seeing the problems and bringing them forward, but they are not necessarily heard in a legal way.
In your book Ethical Business Practice and Regulation: A Behavioural and Values-Based Approach to Compliance and Enforcement, I really liked SEPA's little chart on compliance, because I like the point of this trust track: the people who are trying and delivering get treated differently. I think there are six different categories: criminal; chancer (I really liked that one); careless; confused; compliant; and champion.
Quite obviously, all of those different people should not get hit with the same blunt stick, because they have different needs and will respond in different ways. What I also liked about your work is this nuance: it helps the people who need helping and does different things to the people who are misbehaving in their different ways. I think that is a powerful and interesting model to look at.
CHRIS HODGES: Yes. You say, "What's the problem?" I agree, the SEPA chart has been widely quoted and is very striking. I think we can actually improve on it these days if we apply the latest science, but that very definitely is the direction of travel.
The major point about that is that you are not just looking at motivations; you also need to look at competencies and resources, and then at tools and interventions. The CEO of the UK Office for Product Safety and Standards, Graham Russell, has developed with colleagues a concept of regulatory delivery and a regulatory delivery model, and he has worked with a number of developing countries and the OECD on this.
I spurred him on to writing the model down in a book called Regulatory Delivery. In it he says that regulators need governance, transparency, an outcome focus, and a risk-based approach: which risks are more important than others? You need data to work that out. His is one of the few agencies that actually tries to report on outcomes: Are we improving the outcomes for product safety or not?
One of the things that I absolutely love about that is that he deemphasizes the word "enforcement" and says, "What about 'intervention choices'?" So you use the right tools to support nice people whereas if you have a criminal, you go in hard and remove them basically.
Similarly the Civil Aviation Authority almost never fines companies, whether it is traffic control, which may be a public body, airlines, airports, or people who are involved in fuel. Instead they talk to them, and then, if it is not working, they remove licenses or qualify licenses to operate, and that focuses the mind enormously on getting the change that needs to happen to restore trust and reduce risk, saying: "Okay, we now think you are all right. We will trust you to land planes over a certain weight, but until you do that and satisfy us we won't."
One can deploy interventions, but, to labor the point from right at the beginning, it isn't deterrence. There is a very important political point here, which is: What sort of society do we want to live in? I believe that deterrence and ruling by fear are medieval concepts that should not have a place in any modern democracy, and you only have to look at Russia at the moment to ask: "Do we want to live there? No."
This takes you toward asking: Shouldn't we have a more open, level playing field of individuals who are responsible and cooperate with each other? That means actually taking away authoritarian ideas like deterrence, as civil aviation does, and using newer language like "interventions," "outcomes," and "risks."
Certainly data and new institutions work better to deliver the right outcomes, but it does mean involvement earlier on, and surely that is a better society. It does not mean that suddenly everybody has to think the same thing; far from it. But there is respect and discussion there. That is the sort of society that I think is going to be exciting, fulfilling, and innovative.
HILARY SUTCLIFFE: What a fantastic place to end this podcast, Chris. Thank you very much indeed. We will put Chris's book and some of the links and references that he has mentioned in the podcast underneath.
I would just like to say, Chris, thank you very much for coming, thank you very much for talking to us about this fascinating subject that really will help shape the society we would all rather live in. Thank you very much indeed.
CHRIS HODGES: A great pleasure, Hilary. Thank you so much.
Carnegie Council for Ethics in International Affairs is an independent and nonpartisan nonprofit. The views expressed within this podcast are those of the speakers and do not necessarily reflect the position of Carnegie Council.