Hello and welcome to Elucidations, an unexpected philosophy podcast. I'm Matt Teichman, and with me today is Witold Wienczyk, Consulting Director for Development Innovation Lab at the University of Chicago. And he's here to discuss statistics in academic research. Witold Wienczyk, welcome.
Hi, thanks for having me. So this is a topic that's near and dear to my heart. I have to say, one thing that's been kind of bothering me for, I'd say, the last 20, 25 years in our popular media is that there seems to be sort of an arms race in overconfident language.
People will say something like, this study shows X, Y, Z, and it'll be like one experiment. And like, you can't conclude that the thing is the case after one experiment. And, you know, this word "shows" is typically reserved for cases where the thing you're saying is shown is actually true. It kind of has what philosophers would call a factive presupposition.
And I feel like we see this all over the place. Instead of saying something like, oh, there was a study and the results are suggestive of X, Y, Z, people will be like, well, we now know X, Y, Z. And I'm like, I don't know. Are we really in a position to conclude that with really strong certainty yet? So anyway, I wonder: is this something that you've also noticed in our media culture, that there's a hesitancy to say there's anything we don't know, and a pressure to act omniscient?
So I think there are a few layers to this question. The first, a generic point that goes beyond any media depiction of scientific facts, is that language interacts with statistical knowledge, which I guess we'll be talking about here mostly, in a quite imperfect way. The obverse of what you describe, which is my particular pet peeve, is
headlines where "no evidence" makes an appearance. And also in colloquial language, where we'll say, but currently there is no evidence for X, which people will typically take to mean that there is actually negative evidence, evidence that it's not the case, rather than an absence of evidence. And the way we discuss science is so...
good at conflating all kinds of conflicting claims. And then what you describe is the degree of certainty, which is also very hard to express in just regular conversation. But I think, apart from everyday language, it's interesting to think about this from the perspective of the practice of science and scientists.
Or, since we will be touching on economics a lot in this conversation, to do a supply-side analysis. Because if I'm thinking about the demand side for those media headlines, you would think,
a priori, well, to say "study shows" or "study suggests" is the same number of bits of information. So why would we have a preference for one or the other, right? So if we start from the supply side of scientific facts, it seems like there is obviously a very competitive market out there to break through the noise. And
it probably highlights the general trend in academia of there being more scientific facts and also more scientists competing against more noise. So like, if there was a case where we could be really confident in the results, that would get more clicks. And so there's pressure to act like we've actually gotten a result that's robust. Yeah, there is more value nowadays to creating an
atomized scientific fact, because there are outlets out there that are specializing in creating that product, right? But to say that doesn't really answer why, culturally, we do have media outlets that specialize in packaging atomized scientific facts into things that people will click on, right? And I don't have a great take on this, but it's
interesting that you could make an almost conservative or reactionary critique of this entire process, where science is becoming something commodified and popular because of the assumption that a regular person is meant to be interested in science. So it's maybe a bit of a chicken-and-egg problem. We assume that people care about scientific facts.
Or maybe we assume that people should care about scientific facts. And then the role of science, or scientists, rather, slowly becomes to be a supplier of those atomized facts, which are packaged in a
lowest-common-denominator kind of way, so that it's as accessible as it can be to a regular consumer of media news. This probably differs from the model of science as it was in the 1920s, even though the concept of popular science also existed then.
There was maybe less meddling from the public in the process of deliberation, maybe because people felt less like it was their job to have an opinion about the stuff that was being debated among scientists? I still think about this from the viewpoint of a scientist as a supplier rather than from the demand side: someone who's trying to second-guess how to make their product most attractive.
Well, like one thing that I found when I was doing my PhD was there was tons of pressure put on especially junior researchers, but also PhD students to like always be having a breakthrough. And frankly, it was like exhausting. Like, can you have a breakthrough every three minutes? Can every single person in a graduate student cohort be revolutionizing everything? I mean, it seems to me no.
there was this feeling that like, well, we have to get the best for every position and we have to put out the best for our latest journal issue. And the best equals radical breakthrough because those are the things that we've enjoyed in the past.
What's the role of the breakthrough versus the interesting paper that builds on some previous stuff and might be useful later, with more modest pretensions? Yeah. So this is reminiscent of a hypothesis that you and your listeners probably know, by Ortega y Gasset, a Spanish philosopher from before the Second World War, about how science is...
proceeding in a cumulative fashion, through the accumulation of small discoveries, or the addition of small facts, rather than through Copernican revolutions every hundred years. I think people nowadays are more
in favor of this mode of thinking, because academia is just a much larger universe than it was a hundred years ago, and the number of PhDs per discovery is growing at a steady clip. It sounds flippant, but these are things that people have actually graphed, because the
inputs into any single discovery are getting more heterogeneous and larger in volume. So you just need more people to produce those inputs. But there's a funny philosophical tension here, right? Because we still operate under the model of science as, aesthetically speaking, moving in kind of breakthroughs and leaps and bounds. And at the same time,
not only has the environment of doing science changed, but the mindset is also one of this Taylorization of the production of knowledge. It is a thing which, in terms of its working practices, is getting more and more rigid and prescribed.
Maybe to spell that out for people who haven't heard of Frederick Taylor: Taylorist is a word for the kind of thing that happened in the early 20th century, where a lot of individual fabrication processes were mechanized and the assembly line was invented, so that things got mass-produced in factories. So is that right, that what you're alluding to here is that academic research is more like it's coming out of a factory now?
Yeah, you can produce papers, scientific papers, especially in the harder sciences, in a completely algorithmic fashion. Science has been commodified as a product, and this has sort of created a number of efficiencies in producing those papers, right? And we've all seen them, everyone who has interacted with academia, those super successful professors who were
producing 25 to 30 papers per year. There's, of course, a whole separate conversation as to who's actually producing those papers. But you wouldn't really imagine the old model of science in the early 20th century as any scientist, no matter how brilliant, producing 30 to 40 pieces of work every year. It's a production line, a short version of it.
So you think maybe that there's a parallel between this phenomenon that we're on about now, namely that new researchers feel pressure to be transforming everything with every single paper they publish and kind of the issue we started with, which is that people reporting on current science often feel pressure to exaggerate the level of confidence we should place in the results. Does that come from the same place? Yeah. So,
We spoke a little bit about the supply side of science and scientific products. Now, to answer this, maybe it's good to briefly go back to the demand side. As we said before, it seems like if I'm on the receiving end of some scientific facts, scientific information, it's not a priori obvious that I should be more drawn to
more certain information than less certain information. It's an imperfect analogy, but we as humans, as a species, are designed to crave uncertainty, if we think about how dopamine works; people are uncertainty consumers. That's right. The first thing I thought of was skydiving. Like, you want to not know for sure if you're going to live, and have an adventure, right? It's this impulse to adventure. Yeah. And on the
sad end of the spectrum are slot machines, right? And why people get into gambling. Right. I could know that I'm going to hold onto my money, but no, it's more fun to not know if I'm going to hold onto my money. So why does this, it's slightly perverse to ask this, but why does this principle not hold when it comes to us experiencing scientific knowledge, both in terms of
being people in academia trying to find things out, or being consumers of facts trying to find out information? Do we actually crave certainty? Do we hate ambiguity? I feel like, psychologically, that's a slightly unresolved question. Yeah.
Right. It seems like in different contexts, sometimes we crave uncertainty. In other contexts, we flee in terror from uncertainty. And figuring out when which of those things happens, maybe it's a little bit tricky. Therefore, I think a really productive critique of this starts from the supply side, as I called it, and the institutional side. What is making us act this way? And what I find curious about this, and here maybe we are now segueing from...
very general points about science to talking about statistics is that in sciences that utilize statistical reasoning and statistical analysis of experiments, for example, there are many initiatives for fixing science, but interestingly not so much for fixing scientists. What I mean to say by this is
we put a lot of focus on things such as replication of studies, or practices that will make people publish less attractive results. And this is great. These are institutional fixes to improve the standards of science as a whole. But interestingly, this still means that the scientific field is essentially divided into policemen and research bandits. That's pretty funny.
I don't have a great take on this, but it's interesting to ask whether there is some innate difference between different temperaments in academia, where we will always have this divide. Or is there some institutional incentive which is turning regular people, who could sit down with uncertainty in normal circumstances, into sort of peddlers of certainty and scientific half-truths? I don't know. Actually, what do you think?
Yeah, so I'm not sure at a global scale. I will say that, anecdotally, in terms of what I've experienced, these social conventions can really vary, at least from discipline to discipline. So to take an example, when I was studying philosophy, I think there was a strong expectation that any speaker giving a paper would be really, really, really confident about the argument they were making. You see it all over the place. At conferences, you see people basically saying,
Everyone's gotten it wrong before me for the past 2,500 years. I finally figured it out. We're done now because I figured it out. But, you know, lately as I've been getting more and more interested in computer science and at least in programming language theory talks, people are way more tentative. They're like, oh, you know, I have this new idea for construction. Here's how it works. Actually, last night, I think I mathematically proved that it fails in every case. But maybe you guys could, like, help me fix it.
I'm not sure what the takeaway of that is from those two contexts, but it at least suggests to me that there's huge potential for variation. I think there is an interesting takeaway here, in that this issue can be solved by institutions and is very culturally specific to different domains. I think the thing to say here, which comes through in your example already, is that you might initially think that this maps onto the hardness of the science. So,
I studied mathematics, and in mathematics it's quite normal for someone to come in with an attitude of subverting existing knowledge, and this is done without any amount of peacocking either: some facts are true and some facts are false, and you have to present them with absolute confidence. Interesting. But then
something that's in the middle of the spectrum of hardness, which I have been interacting with a lot in the last few years, which is economics, has conferences which, when people describe them to me, sound like a duel to the death to present your paper. And there's definitely this mode of having to show that you're absolutely right about everything and you've thought about every possible
logical objection to your paper. And we're not talking about a particularly hard science here. We're talking about something which is often completely assumption- and model-driven. So rightness and wrongness don't really enter into it. Yeah. And where it's hard to get repeatable experiments and laboratory conditions and all that; that's another real difficulty, I guess, in economics. I would say that the presentation doesn't map onto the degree of certainty,
in that you have to present any fact, no matter how uncertain you are, as ultimately true, to break through the noise. And from my experience working in bioinformatics and medical sciences as a statistician, those conversations, they don't have to be so intense. Yeah, right.
This part of the conversation reminds me of a fascinating example, which I know a little bit directly from my work with Michael Kremer at the Development Innovation Lab. Michael, as some of you might know, won the Nobel Prize in economics in 2019. He's a super accomplished development economist, and interestingly,
Some years back, some years prior to his Nobel, in fact, he had an episode of one of his major findings being found to be wrong. Or to be more precise, one of his papers was found to contain a number of mistakes.
And what's very interesting about this case, especially because it happened in a discipline like economics where, as we said, all claims have to be made with this completely unwavering certainty, is that Michael very graciously produced an update to his paper. He redid his analysis.
He explained whatever mistakes were in his data, and he just refreshed his analysis. And he collaborated with the people who were criticizing his research by giving them full access to his data. And lo and behold, he came out of this scientific disagreement with his reputation enhanced, precisely because it's so uncommon for people to engage in dialogue with the people who are trying to pick their paper apart. And I think this story, to me...
It's a bit like an analogy to something we hear in the media a lot. It's going to be a weird analogy, but I just can't help but think of it. There is a lot of talk in international relations about giving people off-ramps to de-escalate conflicts. So when there is an intense disagreement, the way to resolve it, to sit down together and have talks rather than escalate the
conflict, is apparent to both sides at all times. And it's a weird analogy, but it seems like in academia people are completely unaware of the existence of an off-ramp. You see this over and over: when people's research findings get criticized, there are data problems, or sometimes they're found to have fabricated the results. Sometimes it's something more innocent. But
invariably they go for the nuclear option. Yeah. Which is really scary in this analogy. They try to personally attack the person that identified the mistake and claim it's definitely not a mistake, et cetera, et cetera. So it seems like in our academic institutions, people need more practical cases of people making mistakes and then doing...
the right thing. Yeah. I mean, really, if somebody points out a mistake in your research, the first thing you should do is thank them, in my opinion. Because I don't want to be making mistakes in my research. But then what happens is, if your goal in the back of your mind when you're doing research gets twisted from what it should be, which is to learn the truth, to, whatever, getting lots of acclaim for your important work or something, then the incentives can be aligned such that you feel like you can't thank the person for
noting the mistake in your work. So we were talking about what happens in kind of the good case, the case where everyone's acting honestly with good intentions: somebody published research that contains a genuine mistake, another person pointed out the mistake, and then what they do is collaborate together on a follow-up study that rectifies the error. What would be an example of research where everyone isn't such a good Samaritan, and
the original author digs in their heels when the error is pointed out, or maybe the data are straight-up falsified? So the first and most obvious is fabrication of data. This doesn't really occur that often when it comes to statistical analysis, but you'd still be surprised that, with all of the more complicated tools to fabricate claims available, people will go for this lowest-hanging fruit.
A really cool example from last year was Dan Ariely's paper, "Signing at the beginning versus at the end does not decrease dishonesty," which is an analysis of insurance data. Well, this dishonesty paper turned out to have at least part of its data completely fabricated with a random number generator in Excel.
Wow. And as soon as someone plotted their data, it was obvious that someone had just made this up. I have to say, he's one of the three authors on this, and it's completely-- I don't think it has been determined who fabricated the data on that paper. So this is just an example of things that do happen, without actually pointing a finger at anyone there.
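To make that point about plotting concrete, here is a toy sketch in Python. Both samples below are invented purely for illustration and have nothing to do with the dataset from the paper being discussed; the idea is just that plausible real-world quantities tend to be skewed and tail off, while numbers produced by a spreadsheet-style random number generator come out flat with a hard cutoff, which even a crude histogram makes obvious.

```python
# Toy illustration: why simply plotting a variable can expose fabrication.
# Both samples below are made up; neither is the dataset discussed above.
import numpy as np

rng = np.random.default_rng(0)

# A plausible real-world quantity: right-skewed, most values small, long tail.
plausible = rng.gamma(shape=2.0, scale=8000.0, size=5000)

# A fabricated-looking quantity: uniform draws up to a round cutoff, the
# signature of RANDBETWEEN-style spreadsheet fabrication.
fabricated = rng.uniform(0, 50000, size=5000)

edges = np.arange(0, 55000, 5000)  # 5,000-wide bins from 0 to 50,000

def bin_counts(values):
    """Histogram counts, standing in for an actual plot."""
    counts, _ = np.histogram(values, bins=edges)
    return counts.tolist()

print("plausible: ", bin_counts(plausible))   # counts fall away in a tail
print("fabricated:", bin_counts(fabricated))  # counts stay roughly flat
```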
But then why would you even have to fabricate data when you can be completely selective in choosing data for your analysis? As long as you can get your hands on any data. It's usually quite hard to show that people have engaged in cherry-picking of their inputs. You would have to have access to source data to prove this.
Are there techniques out there for analyzing data? Like you could maybe try to measure the level of randomness in it or something to just see sort of mathematically whether it's plausible that it could be not cherry-picked? Yes, and this is one of the most fun, I mean, if you're a boring person, this is one of the most fun things you can do. This is a safe space for boring people to really nerd out.
There is a whole branch of forensic analysis of tax records, etc., where you will look at the distribution of not just the values reported, but the distributions of digits. There is a really beautiful law in mathematics called Benford's Law. Is that the one where everything starts with one?
No, but it's one of those. Which, yes, is about the distribution of digits in numbers. As far as I know, people still don't quite understand why this happens, but it makes it quite easy to pick out things that have been fabricated by people.
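As an aside, a minimal sketch of the first-digit check that Benford's Law makes possible could look like the following Python; the "organic" and "invented" figures here are both simulated for illustration, not taken from any real tax or accounting records. Under Benford's Law, the leading digit d appears with probability log10(1 + 1/d), so roughly 30% of values start with a 1.

```python
# Sketch of a Benford-style first-digit check on two simulated sets of figures.
import math
import numpy as np

rng = np.random.default_rng(1)

def leading_digit_freqs(values):
    """Fraction of values whose first significant digit is 1..9."""
    digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v != 0]
    return [digits.count(d) / len(digits) for d in range(1, 10)]

# Benford's Law: P(leading digit = d) = log10(1 + 1/d).
benford = [math.log10(1 + 1 / d) for d in range(1, 10)]

# "Organic" amounts spanning several orders of magnitude (log-uniform), which
# track Benford closely; "invented" amounts drawn uniformly in a fixed range,
# which do not.
organic = np.exp(rng.uniform(np.log(10), np.log(1_000_000), size=20_000))
invented = rng.uniform(100, 10_000, size=20_000)

print("digit:   ", list(range(1, 10)))
print("Benford: ", [round(p, 2) for p in benford])
print("organic: ", [round(p, 2) for p in leading_digit_freqs(organic)])
print("invented:", [round(p, 2) for p in leading_digit_freqs(invented)])
```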
Usually, when it comes to fabricating random-seeming data, people are incidentally very bad at it, because they will overcompensate. And you can do a really cool thing with your friends and ask them to make up a sequence of tails and heads. Usually you can get really good at detecting which sequences of tails and heads have actually been generated by flipping coins and which have been made up by people.
And the difference is that people will always try to make it seem more random, whereas if you start flipping coins, very soon you'll have four or five or six heads or tails in a row. And people can't sit with this non-random randomness.
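That party game is easy to put into code. A minimal sketch, where the "made-up" sequence below is invented by hand for illustration: genuinely random flips usually contain a surprisingly long run of identical outcomes, while human-invented sequences almost never do.

```python
# The heads-or-tails game in miniature: compare the longest run of identical
# outcomes in real random flips versus a hand-invented sequence.
import random

def longest_run(seq):
    """Length of the longest block of identical consecutive symbols."""
    best = cur = 1
    for prev, nxt in zip(seq, seq[1:]):
        cur = cur + 1 if prev == nxt else 1
        best = max(best, cur)
    return best

random.seed(42)
real = "".join(random.choice("HT") for _ in range(50))

# A hypothetical sequence of the kind people write down: it alternates too
# eagerly and rarely repeats the same outcome more than twice in a row.
made_up = "HTHTTHHTHTHHTTHTHHTHTTHHTHTHHTTHTHHTHTTHHTHTHTHHTT"

print("real:   ", real, "-> longest run:", longest_run(real))
print("made up:", made_up, "-> longest run:", longest_run(made_up))
# In 50 fair flips, a run of five or more is more likely than not; sequences
# invented by people hardly ever contain one.
```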
Anyway, going back to data fabrication: yes, there is cherry-picking, and that is probably as egregious as making up your data. But the much bigger problem, because it's pervasive, is that people optimize for interesting results at the level not of cherry-picking their data from an existing dataset, but of finding datasets that are interesting. Recently, people might have seen this headline, which all the major outlets picked up, about a hundred grand being paid to the very famous economist Alan Krueger to publish a
paper on Uber, which was quite favorable in its conclusions to ride-hailing apps as an alternative to traditional taxi cabs. And Krueger has been criticized widely for taking money to write a paper.
I don't have a strong opinion on the issue of someone taking 100k to write a paper in economics. You can make up your own mind whether this constitutes a conflict of interest, or whether it's actually productive because this paper wouldn't have gotten written otherwise. But what's interesting is that people are not really picking up on the
data side of projects like this, which is increasingly common in academia, namely companies providing researchers with datasets and probably already anticipating what sort of conclusions people are going to be drawing from those datasets. So here, even without any conscious research misconduct or any intention of research misconduct, there is
absolutely potential for fudging up some results because the access to information is completely controlled by the private sector. So we've looked at a couple different strategies for publishing a fraudulent study. One is to just fudge the original input data.
Another one is to sort of cherry pick so that you've got real data, but it's curated so that the statistical trend in it seems to be whatever the author of the paper would like it to be. Are there other ways that you can fudge a statistical study besides these two? For example, maybe even with perfect data? Yeah, this is where the real fun begins for a statistician.
There are criminals out there who have to use very crude tools and there are people who can do their crimes undetected. And now we're in this category. So sometimes you have to marvel at people's ingenuity in misapplication of statistics to prove something. However, probably as with actual criminals, in 99% of cases,
people are still using pretty crude tools to make up their results. So how does it work, if we assume that you can have perfect data and still get to whatever answer you want? The number one concept here comes under the umbrella of what we would call researcher degrees of freedom. So: what is the number of ways I can analyze my data
which are slightly different from each other and might lead to different conclusions? Now, if I give you data on some new miracle drug, or maybe, let's make this up, a new meditation app that has been downloaded by 10,000 users, and we want to show that it has some beneficial outcomes for people. Makes you live longer or something like that.
There you go. Yeah. Why limit yourself to "makes you live longer"? Why not? It makes you earn more money, or it makes you sleep longer, or it makes you sleep shorter, which can also be good. Or it makes you more relaxed, or it makes you more relaxed in the afternoons, or it makes you more relaxed in the mornings.
This is one aspect of the degrees of freedom. But then, measuring things is really cumbersome. So we don't even have to bother with measuring all of those things. What if we only measured one thing, and we have those 10,000 people? Okay, so we cannot find the increased relaxation levels in the general population. Why don't we ask about relaxation levels in men
who are aged over 40, who are second-generation immigrants? You start cutting your statistical sample into finer and finer subgroups until you find something. And the way that statistics works is that you're guaranteed to find something eventually. So you can always find positive claims,
as long as you're not too bothered about what kind of positive claim you wanted to make. That's interesting, yeah. Sounds ridiculous, right? But this is part and parcel of how statistics is used in many academic domains. A lot of experimental psychology claims are of this variety, where some data has been collected, some hypotheses have been made post hoc, and, lo and behold, they actually happen to be true.
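Here is a small simulation of that subgroup-slicing, offered as a sketch rather than a claim about any real study: the outcome is pure noise, the hypothetical meditation app does nothing by construction, and yet scanning a few made-up outcomes across a few made-up subgroups at the conventional 5% threshold all but guarantees that something comes out "significant."

```python
# Simulate researcher degrees of freedom: a null effect, many subgroups,
# many outcomes, and a 5% significance threshold applied to each test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 10_000  # hypothetical app users and non-users

used_app = rng.integers(0, 2, n)        # 1 = downloaded the (made-up) app
sex = rng.integers(0, 2, n)
age_band = rng.integers(0, 5, n)        # five age bands
immigrant_gen = rng.integers(0, 3, n)   # 1st/2nd/3rd generation

# Outcomes are pure noise: by construction the app has no effect on anything.
outcomes = {
    "relaxation": rng.normal(size=n),
    "hours_slept": rng.normal(size=n),
    "income": rng.normal(size=n),
}

findings, tests = [], 0
for name, y in outcomes.items():
    for s in range(2):
        for a in range(5):
            for g in range(3):
                cell = (sex == s) & (age_band == a) & (immigrant_gen == g)
                treated = y[cell & (used_app == 1)]
                control = y[cell & (used_app == 0)]
                if len(treated) > 20 and len(control) > 20:
                    tests += 1
                    _, p = stats.ttest_ind(treated, control)
                    if p < 0.05:
                        findings.append((name, s, a, g, round(p, 4)))

print(f"ran {tests} subgroup tests on pure noise")
print(f"'significant' findings at p < 0.05: {len(findings)}")
for f in findings:
    print("  ", f)
```

With roughly ninety tests run at the 5% level, a handful of spurious "effects" is expected even though nothing is there, which is exactly the guarantee being described.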
So I feel really pulled in two directions on this, because on the one hand, it feels like there's probably something dishonest about leaving it as open as possible what hypothesis you're trying to test, and only nailing down a hypothesis after the fact, based on which of the possible hypotheses could be corroborated by the data you've gathered.
That feels a little weird. It feels like you're just looking for a win. But on the other hand, isn't there a legitimate scenario in which you start off exploring one possibility, and that leads to a dead end, but then it leads you to a genuinely interesting discovery that's different from what you started off with? Like, that's the spirit of scientific discovery, right? Absolutely, right. To proceed in that manner. Like, be open to accidents. Yeah. So what's the difference between being open to accidents and then doing this more craven thing of
being deliberately wishy-washy about what you're checking for at the outset? So statisticians have a way of coping with this, which is now finally becoming standard practice in many journals. To go back to this ridiculous example I constructed earlier, it's called multiple hypothesis testing corrections. A researcher here can theoretically
go on and test any number of hypotheses, but they will do a mathematical or statistical correction, or they will penalize themselves in their calculations for the fact that they haven't tested one hypothesis, they've tested 20 things. And the nice thing is that in statistical inference we know
how likely we are to get spurious results as we increase the number of hypotheses. So we know what kind of correction is needed as a function of the number of hypotheses we tested.
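As a minimal sketch of what such a correction looks like in practice, with made-up p-values rather than results from any real study: Bonferroni simply divides the significance threshold by the number of tests, and Holm's step-down procedure is a slightly less conservative standard alternative.

```python
# Multiple hypothesis testing corrections applied to a set of made-up p-values.
def bonferroni_reject(pvals, alpha=0.05):
    """Reject only where p < alpha / m, with m the number of tests run."""
    m = len(pvals)
    return [p < alpha / m for p in pvals]

def holm_reject(pvals, alpha=0.05):
    """Holm step-down: compare the k-th smallest p-value to alpha / (m - k)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for k, i in enumerate(order):
        if pvals[i] < alpha / (m - k):
            reject[i] = True
        else:
            break  # once one ordered test fails, all larger p-values fail too
    return reject

# Twenty hypothetical p-values from twenty ways of slicing the same data;
# three of them look "significant" at the usual 0.05 level on their own.
pvals = [0.003, 0.021, 0.048, 0.09, 0.11, 0.15, 0.22, 0.28, 0.31, 0.36,
         0.41, 0.47, 0.52, 0.58, 0.63, 0.69, 0.74, 0.81, 0.88, 0.95]

print("uncorrected rejections:", sum(p < 0.05 for p in pvals))
print("Bonferroni rejections: ", sum(bonferroni_reject(pvals)))
print("Holm rejections:       ", sum(holm_reject(pvals)))
# After paying the penalty for having run twenty tests, none of the three
# "significant" results survives.
```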
The example I gave of multiplying hypotheses, always finding some positive claim, and then getting to publish has very nicely been referred to by Andrew Gelman at Columbia University, who often has pretty fascinating takes on those issues and whom I would encourage people to look up, as "the garden of forking paths," after that famous Jorge Luis Borges story. It basically expresses a slightly more general principle of a researcher having
multiple stages in their analysis, where they can always just decide to go left or right, left or right, left or right. And if you read the Borges short story, we get to the infinite number of universes by multiplying those choices. I think I'm probably butchering that story, which I haven't read in years. But it's important to say that we're talking about an arms race of sorts here, where a single practice of,
for example, making people correct for the number of hypotheses they tested, only means that they're going to find another outlet for making scientific facts using statistical analysis, because the number of those decision points is infinite. The modern practice for this is for people to keep themselves honest,
not just in front of their peers but in front of themselves, by pre-registering the kind of analysis they're going to do. But again, this goes back to your question, which I thought was brilliant, that this is creating a tension with the process of scientific discovery. So we have to also allow for this tension, for the
iterative process, for the creative process, to come into data analysis. And this probably goes back to the earlier part of the conversation where we were talking about the divide in science between the policemen and the bandits. I guess it's an open question whether you want to incentivize people not to break the law, as it were,
or whether you just want to make them better citizens in the first place. Yeah. So is there a jail for researchers, or is there a possibility of redemption? Or do we just create norms that actually make people commit crimes less? It's funny to frame it like this, but those are valid questions for actually turning out
good science. And it seems like the mechanisms we have for that now are basically public shaming and the person gets fired. That's jail for a researcher who commits fraud. I think it can be more of a Catholic Church version of we just pretend that the problem is not there until it gets really bad. In the meantime, we're just going to
shift this person somewhere else. Yeah, yeah, yeah, yeah, right. Or deny someone tenure, and then they start working somewhere else. Yeah, yeah, just, like, as long as it's not my problem, it's not a problem kind of attitude. So sometimes people getting fired is the best thing that can happen, right? But it's still not a great thing. Yeah, absolutely. If this is the only recourse that we have, then we're saying that research misconduct
is sort of this binary thing where there are just some bad apples there and everyone else is just going along with the program. All the gray-area cases, even the dark gray cases, we don't care about. But then, only once in a blue moon, some individual gets scapegoated. That seems like a bad system. Yeah. So we've uncovered all of these kinds of death traps that are waiting for anyone who wants to draw on
heavy-duty statistical machinery to arrive at, you know, a policy conclusion, or new directions for scientific research, or funding for stuff. There are all these perils on the one hand. On the other hand, we need statistical machinery to help us make these decisions. It's unavoidable. So where do we go from here? What recourse do we have to deal with these various forms of institutional erosion that we've just been discussing? Yeah.
I think anyone who ever had a statistics class for their degree will appreciate the fact that statistics is not taught very well; when people get to interact with statistical analysis, they usually are encouraged to think of stats as something relatively easy.
We try to simplify those issues so that an average, and absolutely no offense meant by picking the first example that comes to mind, an average psychology researcher can publish their paper. So we teach methods as if they were quite standard and you could follow a flowchart to conduct your analysis. And it's the same in medical statistics or empirical economics.
And the first thing to say is that we have to treat statistics as something that's very hard. You have to just dispense with this conviction that it can be done by anyone. And of course, I would say this, because I'm a statistician and I make my money by helping people who are not statisticians do their statistical work. Got to keep yourself in business, right? Yeah. I mean...
That's why I'm here. However, I totally acknowledge my bias here. I think having a dedicated statistician in those projects in academia often takes the heat out of a lot of research misconduct potentialities in those projects. Of course, where fraud is going to happen, if the senior researcher has an intention to commit fraud, it's going to happen anyway. But I think in terms of
research groups that are determined to actually get to the bottom of things, the model where the statistician, the analyst, is someone a little bit external, in my experience, often cuts through a lot of issues with analysis: people being under pressure to exaggerate claims, people being under pressure to finish their projects and meet the timeline of a grant, etc., people having big egos.
Externalizing this process can be really good. So I think we don't commonly think of the roles of the researcher, the person generating hypotheses, and the person analyzing the data as separate. But having a bit of this firewall could maybe cut through a lot of those issues, even the institutional issues with how scientific facts are packaged with this extra degree of confidence
that we have to pump into every result that's being presented. That's really interesting. I've never heard anyone propose that before, but it immediately suggests a parallel in computer science. So the parallel would be: there are certain areas in computer science where you really just want people who've devoted their whole careers to working on that. One example would be cryptographic algorithms. If you're writing a to-do list app that somebody can use on their phone, you don't want the people who are writing that app to
come up with the new cryptographic algorithms that the app is going to use. That's really finicky business. You want experts who've spent their entire careers getting all the little fine points of the cryptographic algorithm right, because otherwise your thing is not going to be secure. So there are certain areas where, if the work is really detailed and easy to get wrong, and high stakes, with high consequences if you get it wrong, you really want to have a division of labor where specialists do that part and you just use what they come up with.
And so it seems to me that maybe in pure intellectual research areas, there's a similar situation where heavy-duty stats, number crunching, is involved, because it's so finicky. It's so error-prone. Beginners make huge mistakes. It's hard for them to avoid making huge mistakes. And the consequences are dire when those huge mistakes happen. This is maybe another situation in which it's helpful to have that division of labor. The really funny paradox with statisticians is that, unlike with economists, where it's about
finding a consultant to get your project done, here we're talking about people finding a consultant to try to break their project. Yeah. The intellectual attitude we're talking about here is: I collected a bunch of data, I have a hypothesis. Now I'm asking someone to think about this in a semi-adversarial way. If there is a result, does this result check out under every possible
permutation of assumptions around those researcher degrees of freedom? So, yeah. Witold Wienczyk, thanks so much for coming on. Thanks for having me. This was really nice. The Elucidations blog has moved. We are now located at elucidations.now.sh. On the blog you can find our full back catalog of previous episodes. And if you have any questions, please feel free to reach out on Twitter at elucidationspod. Thanks again for listening.
Thank you.