Alexey Kapterev: Critical Thinking 101 (Part 2)

On April 25, as part of the “MSU 2020 Training Day,” Alexey Kapterev gave a presentation on critical thinking. We present part 2 of the lecture transcript.

Part 1 is available here.



There is no single answer to this question.

I read a book by Joe Lau, a professor from Hong Kong, “An Introduction to Critical Thinking” (available in Russian in the Sberbank library). It describes a four-step process:

Types of evidence that are generally worth believing. There is no single classification, but I single out:


There is a question that I am most often asked:


It actually contains two separate questions:

  1. What should convince best? (how it ought to be)
  2. What actually convinces best? (how it really is)

I'll start with the second question.


I very often hear that stories convince best. In fact, nothing could be further from the truth. In some contexts they are convincing, but for the most part stories are the least convincing evidence. People are mostly convinced by statistics and logic.


Stories influence intent. Stories motivate well, but they do not convince. The difference between persuasion and motivation is roughly the difference between the phrases “I believe that smoking is harmful” (persuasion) and “I am ready to quit smoking” (motivation).


If we want to convince people rationally, we convince with statistics and logic. If we want people to do something, we need to tell them a story — something visceral that grabs the soul, that touches a nerve.

Just recently I saw the results of a fresh, very good meta-analysis:


That is, people tend to make health decisions based on emotions. For some reason. Although it seems to me this is the last area where decisions should be made on emotion.


We can recall the metaphor of the elephant and the rider. The elephant loves stories; the rider loves statistics and logic. It depends on which of the two we are talking to.


Let's start with logic. Logic is wildly convincing — I'll prove it to you right now, watch:


Think for half a minute. If you think there is not enough data, I understand you perfectly. I have done this trick several times with different audiences, and often there are people who think there is not enough data.

Here is what the poll on Habré showed in the previous part of the article:



This is actually not the case; you were mistaken. Don't be offended — I'll prove it to you now; it's not difficult.

Svetlana can be either married or unmarried — there are only two options. Suppose Svetlana is not married; then Maria, who is married, is looking at her, so someone married is looking at someone unmarried.

If, on the contrary, Svetlana is married, then she is looking at Irina, who is not married — and again someone married is looking at someone unmarried. We must conclude that the correct answer is yes.
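Since the puzzle has only two cases, the case analysis above can be checked mechanically. Here is a minimal sketch (the names and the "looks at" pairs follow the puzzle as retold here: married Maria looks at Svetlana, and Svetlana looks at unmarried Irina):

```python
def married_looks_at_unmarried(svetlana_married: bool) -> bool:
    # Who is married (Svetlana's status is the unknown), and who looks at whom.
    married = {"Maria": True, "Svetlana": svetlana_married, "Irina": False}
    looks_at = [("Maria", "Svetlana"), ("Svetlana", "Irina")]
    # Is some married person looking at some unmarried person?
    return any(married[a] and not married[b] for a, b in looks_at)

# The claim holds in both possible cases, so the answer is "yes" regardless:
assert all(married_looks_at_unmarried(s) for s in (True, False))
```

Enumerating every case like this is exactly the structure of the verbal proof: the conclusion does not depend on the unknown fact.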

If I don't tell you the correct answer, leave you as a group in one room, and walk away, then after some time the people who are right will convince the people who are wrong.


This experiment has been done many times, and it works, but there is a very important caveat: you are not emotionally invested in this story. These are strangers to you, and in general you are ready to eventually agree with the logic.

Logic works well — convinces well — if people do not have strong prior beliefs.


There are several problems with logic; these are logical errors that people make.


In TED talk No. 1 — maybe you've seen it — “Do Schools Kill Creativity?”, the logic of the reasoning is: all children are creative; Petya is a child; therefore Petya is creative. This is a deductive conclusion. It should be 100% reliable, with no other possible outcomes — but it turns out that you should always start with definitions.

What is creativity? It turns out that creativity is the ability to have ideas that have value; the author himself gives this definition within the talk, so there is nothing to argue about here.

By this criterion, children are not creative at all. We can look at the number of patents registered to children and find that there are very, very few of them.

Basically, ideas born in children's heads have no value. Within this definition of creativity, children are not creative, which means it is not a given that Petya is creative. This is a false premise: we draw conclusions based on false assumptions.

People often reason deductively. (Deduction goes from the general to the particular: we have a generalization, and then we make an inference on its basis.)

Here is another example of a deductive error:


What do critical thinking tests do? They force you to fill in what is missing: on the basis of what premises was this conclusion drawn?

And this conclusion is made from the premise that everything unnatural is bad — which is absolutely wrong. Cities are unnatural, dams are unnatural (for beavers dams are natural, but not for humans). Nevertheless, cities and dams are good — at least for me.

And the third type of error is inductive. This is when we accumulate many observations and then draw a global conclusion.


After some time we might conclude that all people are right-handed, which is completely wrong: there are a huge number of left-handed and ambidextrous people. It is a typical mistake to draw global, far-reaching conclusions from a limited sample. A classic mistake.

Psychotherapists will surely tell you that there is now a pandemic of depression. Of course a therapist sees a pandemic of depression — they are the person people with depression come to.


Good question.

In science — in clinical pharmacology, for example — there is a similar hierarchy of evidence. At the very top are review studies, that is, generalizations of a large number of experiments. Logically, we understand that the larger the sample, the more reliable the result. A single study proves little, but when you have a huge array of material — a whole literature — we can conclude that the result is fairly reliable. Of course, only what is published gets generalized.


Meta-analysis is one of the most reliable sources.

Then comes the classic randomized, placebo-controlled trial. This is when we have a control group and an intervention group.

We can't simply do nothing with the control group, because people get upset. So we give one group a sham therapy and the other group the real therapy, then compare the results.

Then come all sorts of cohort and cross-sectional studies, where we observe cohorts of people for a long time — for example, all vegetarians, or all plumbers — and then conclude that plumbers are prone to this or that. Not very reliable; such studies get a low reliability rating.

And at the very bottom is the isolated case. On the basis of one case we can only conclude that this is what happened once: a black swan occurred, a unique event. On the whole, of course, we cannot draw large conclusions from an isolated case.

That is roughly how the hierarchy of evidence works. Believe large studies; do not believe isolated cases. Our elephant, of course, works exactly the opposite way: it believes isolated cases and does not believe statistics.


When you analyze critical thinking, you will find that people make a huge number of logical errors, because the elephant interferes with them.

Survivorship bias: suppose smoking kills 99% of people, and the 1% who survive walk around telling everyone, “I smoked and nothing happened to me.” Of course this is a mistake; there are many examples like this.

Is it useful to know these biases? I don't know. Reading about them is fun. Does it help by itself? I have big doubts.

I have seen some experiments on combating cognitive biases: people were trained, and researchers then watched whether they made better decisions. The results are mixed.

As someone who works in the world of presentations, I am interested in logical distortions made intentionally.


Ad hominem attacks and the straw man. The straw man is when people form a caricature of an opponent — saying that all conservatives want to ban abortion, for instance, although this is not the case at all.

Let's work through an example. Regarding Kipling: read his “The White Man's Burden.” Kipling was a racist and an imperialist. If you like Kipling, then you're a racist.


This sounds logical, in general, but it is not correct. This is guilt by association — poor inductive reasoning. Based on just one work, we draw a conclusion about a person's entire worldview (perhaps he regretted writing that poem). Then we transfer (analogy is a form of induction) Kipling's properties to a completely different person.

Very weak logical reasoning. Yes, people who like Kipling may be racists and imperialists, but such people are likely a minority.


Is this a logical rationale?

This is also weak induction. We have a generalization about boxers: we know boxers get hit on the head, and if we ran boxers through IQ tests, we probably wouldn't get Nobel-laureate results. This is stereotyping: there is a grain of truth in it, but it is not a reliable conclusion.


Is this good reasoning? Also unlikely. Robbing pensioners and charging for utility bills are not the same thing. Probably some utility payments are made by pensioners, and probably some of them are misinformed, but that is a very small share of the population. This is hardly a sound argument. The vast majority of people are more likely glad to have the opportunity to pay for a delivered service.


We know there is such a thing as a false dilemma. But here, I'm afraid, the dilemma is true; it really is so. There are no other options — at least I couldn't find any. I don't think you can drink 6 cans of beer a day and avoid cirrhosis. You would need a uniquely good liver for that not to happen. Most likely it will.


Logic is very convincing — but only if people have no prior emotional investment in the answer.

The question is always “what is the strength of the argument?” That is, how valid are these inductive generalizations? How far does the base — the sample we have — allow us to make such generalizations about, say, boxers?

And are the premises and hidden assumptions true? Critical thinking tests analyze exactly this. The ones I have taken teach you to identify hidden assumptions. You can take them just for practice.

I can recommend this book: “Harry Potter and the Methods of Rationality.” I'm not sure you can find it in print, but it definitely exists electronically.


Look, it's very cool. It is fan fiction in which Harry Potter ends up, at the start of his story, not with the vile Dursleys but with scientists who teach him critical, rational thinking. He arrives at Hogwarts and starts blowing everyone's minds there. A very interesting book.


Maybe you've heard it, maybe not: there was a tall tale about the American space program.


In space there is a problem: there is no gravity, so a traditional ballpoint pen does not write, because the ink is fed to the ball by gravity.

And so, allegedly, America invested some hellish millions of dollars and created a pen that writes in zero gravity. It was handed out to all the astronauts, and they wrote with it.

At the same time, the Soviet cosmonauts in the Soviet space program simply used a pencil.


This is utter nonsense. Firstly, even from the photograph you can see that the pen is real — it really exists. True, it was made by a private corporation with its own money, so it was not taxpayer money anyway.

But the main thing is that the Soviet space program later bought these pens from the Americans, because writing in pencil is not a good idea. Pencil lead crumbles, producing conductive graphite dust that, in zero gravity, flies around unpredictably and settles on the equipment. In general, writing in pencil in zero gravity is a bad idea.

I found this out just by googling. I typed “space pen,” hit enter, and pretty quickly ended up on a site called snopes.com. I can recommend it; it is one of the best fact-checking sites in the world. There is a detailed analysis there of what in this story is true and what is not. Many stories, as you might guess, are partly true and partly false.


I highly recommend checking stories before telling them to anyone, especially in a public speech. The accuracy of stories is always in question. Stories mutate unpredictably in retelling: people forget fragments and insert things that weren't there, because they badly want them to have been. In general, a story's survival depends on its ability to produce a wow effect. The more wow effect it produces, the more people retell it.

In the limit, any story mutates into an urban legend — about huge rats in the sewers, or crocodiles living there. People love retelling these. Apparently this is the essence of the story as a genre: that is how they work, and nothing can be done about it.


I want to say that stories motivate people excellently — I have already shown you the results of a meta-review. They strongly influence people's attitudes. People love listening to stories, and stories motivate them. But as evidence, of course, they are nearly worthless.


In the hierarchy of evidence, the story sits somewhere at the very bottom: a story is a clinical case. A clinical case is the same story, but framed correctly — everything in it is verified, it is recorded, there is a source, and when retelling it you can check against that source.

Most stories, of course, are just stories that are retold.

Stories should be used to explain things to people — as illustrations and analogies. Stories are also often used to report a critical incident.


That is, the first report of the coronavirus was a clinical case. It was not some gigantic study: a critical incident occurred. The same with Ebola.

A pilot project is another place for a story. And for motivation, as I already said: stories motivate, although as evidence they are so-so.

Case studies work in some practical areas: lawyers are taught to argue from cases, business schools often use the case method, and medicine does too. But, again, these are isolated cases.


Stories are good for teaching reasoning. You take a story apart, and it becomes clear where people made mistakes.

Please, if you tell stories from the stage, check the details. I would recommend sitting down almost every time and re-reading how things actually happened.

Memory fails badly; it really does not work well. Each retelling rewrites the memory anew, and many times I have caught myself retelling something that no longer had anything to do with reality.


I will quote Carl Sagan, the great American astronomer and popularizer of science:


Let's apply critical thinking to this very phrase. What's wrong with it? The first thing we can say is that it is itself an appeal to authority. We get a kind of liar's paradox: we should not trust him, because this is the opinion of an authority.

The second problem with this phrase: whom do you think Carl Sagan was addressing? Who was the target audience — ordinary people or other scientists? The answer is other scientists. It is scientists who should not trust the opinions of other scientists.

As ordinary people, or as scientists in other fields, we often have no other choice. We can't say, “I don't believe they discovered the Higgs boson — I'll go out into the yard, run my own collider there, and check.” It doesn't work like that.

There are people who have colliders, Petri dishes, microscopes, and everything else. They are more likely than Instagram bloggers to know how things are. That's a fact of life; nothing can be done about it.


The problem with the quote is, firstly, that it is an appeal to authority, and secondly, that it is addressed to scientists. If you are not an expert, you will have to use the opinions of experts — there is no way around it. You cannot figure everything out yourself; no lifetime is long enough for that.


An expert must, firstly, speak within his field of expertise, and secondly, say things that follow from the research he has carried out.

An expert can also summarize his experience — clinical, practical, managerial, and so on.


There is much less trust in this, because it is not an experiment; it is a collection of stories. He had some situations in his life and generalized them — this is extremely unreliable.

The hammer-and-nails problem: if you have a tool in your hand, you start using it everywhere, and in some cases it works. If it works in, say, 12% of cases, that does not mean it always works, and it does not mean it is even somewhat reliable — you just got lucky. We have already covered survivorship bias; the sampling effect here is similar.

What other problems with quotes?


Firstly, the biggest problem is that people misrepresent quotes. It happens continuously; I've caught myself at it many times. Second, people quote experts outside their scope of expertise: an astronomer is quoted about the coronavirus, which he does not understand — he is a respected scientist in his field, but this is not astronomy. And third, people use quotes from Nobel laureates on general philosophical topics, where nobody is an expert at all.

Is the quote accurate?

A couple of weeks ago I caught myself misquoting Frederick Taylor, the founder of scientific management.


“Forced communication” is highlighted in bold. My topic was communication, so I used it. In the original it was “cooperation.”

I absolutely do not remember where I got this quote. Either there was a mistake in the translation, or I translated it and made the mistake myself because it was convenient for me. This happens.

Please check your quotes against the source. Much is lost in translation, and in addition people tend to misinterpret because of confirmation bias.


There is a site called quoteinvestigator.com, which I can recommend. It often turns out that what you are quoting as Einstein is actually from a completely different, unknown person whom nobody wants to quote.

Secondly, a very important question: does the statement fall within the expert's field of expertise?

I am 44, and I remember Dr. Linus Pauling's vitamin C ad — one of the first television commercials on Soviet television. Linus Pauling was a Nobel laureate in chemistry, and his second Nobel Prize was, I believe, the Peace Prize. Not important. In chemistry he received an unshared award — imagine, he received it alone, while most Nobel Prizes go to teams. A very cool scientist, a very cool chemist.


He lost his mind in his old age and claimed that vitamin C is a cure for all diseases and prolongs life — and, as if to prove it, he lived a long time, seemingly confirming by his own example that vitamin C works.

In reality, vitamin C does not prolong life at all, and in large doses it is harmful. It is very important to remember that an expert in chemistry is not an expert in medicine. He was an absolutely brilliant chemist, but no physician at all.


A certain Jeremy Rifkin. Who is he? An economist and social theorist. Doubtful.


An open letter signed by 154 Nobel laureates. A slightly more reliable source, yes, but a critically minded person should ask: “Who are these Nobel laureates? What fields are they from?”


I went to the site and looked: most are in medicine, some in chemistry, and for some reason very few Nobel Peace Prize laureates signed it. I don't know why. It looks fairly reliable; it's not bad, let's put it that way.


Wikipedia, however ridiculous it sounds, is a good source on scientific consensus — mostly the English one. Precisely because it is easy to edit, bad edits have a hard time surviving: they get attacked from all sides. Statistically, Wikipedia shows very good results.

I was recently interested in one topic — sorry, this is a story, of course, not proof. First I went to Wikipedia and found a well-balanced article; then I went to the Encyclopaedia Britannica and found a strong skew to one side. I happened to agree with that side, but the article was very one-sided.

But there is evidence too. People have analyzed the accuracy of Wikipedia against, say, the Encyclopaedia Britannica, and it turns out Wikipedia is no less reliable.


Here we must ask ourselves the question: “Who and how measured this consensus?” Very important question.


There have been several studies; they show different results, but the lowest figure is 91%. 91% is very good.

In general, of course, science should not strive for consensus. But we do not understand climatology, and there is no hope that we will — it would take about 10 years — so it is most likely reasonable for us to trust the climatologists.

And the last question: does the field of expertise exist at all? For example, here is a quote from Einstein. I will say right away that it is false — he never said it — but that is not the point. The point is: what is it even about? It is not subject to expertise; it is an abstract moral statement. Anyone can say anything about it.


Or here is another completely false quote. This is just a sampling effect.


Look at a huge number of millionaires and you will always find very strange people doing very strange things. That does not mean everyone can do the same. It may well be that Bill Gates really does eat 2 bananas a day — why not.


And now the most important thing we have.


H. G. Wells once said that if we want citizens who make informed decisions in elections, we must educate everyone in statistics.


My own story is related to statistics. I once thought that a person's character and predispositions do not depend on genetics. Then I read a very good, very balanced book — it is already about 20 years old. I recently read its latest review: as before, most of what is in it holds up.


The question posed there: aggressive parents have aggressive children. The dominant view is that upbringing makes people aggressive — aggressive parents treat children in a way that makes them aggressive too. Could there be some other reason?

Judith Rich Harris, the author of the book, asked 2 questions:
  1. Could this be due to school?
  2. Could this be due to genetics?



Maybe aggressive parents live in areas where there is “war” all around, and that is why their children turn out aggressive? Maybe aggressive parents genetically pass their aggression on to their children?


Heritability of character turns out to be approximately 49%. Adopted children are completely unlike their adoptive parents: take an adoptive parent and a random person on the street, and you get exactly the same correlation — no difference at all. Parenting has a fairly small effect on character (5 to 15%); the main influences are school and genetics. It's sad, but the good news is that 49% is still less than 50%: most of it is in our hands.


Statistics work very well against anecdote collectors, armchair theorists, and “people with experience.” I try to google everything; Google Scholar is my great friend.



Suppose I need to find out whether humor affects the persuasiveness of a presentation. People often ask me about humor during trainings: is humor useful or harmful, does it hold attention? And the most common question: how does it affect persuasiveness — does humor convince?


I go to Google Scholar, type “humor persuasion,” and add the word “meta” because I am interested in meta-reviews. Literally on the second line I see a completely fresh 2018 meta-analysis — just a dream. It is in the public domain, free of charge. I click on it and read the abstract, and in principle I may not even need to read the whole article.


The abstract makes clear what is going on. Humor has a weak effect on persuasiveness: 0.13. Square 0.13 and you get very little — humor explains almost none of persuasiveness. You can forget about it as a way to convince someone.
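The “square the correlation” rule mentioned above is simple arithmetic. A quick sketch (reading r² as the share of variance explained is the standard interpretation; 0.13 is the figure quoted from the meta-analysis):

```python
def variance_explained(r: float) -> float:
    # Correlation coefficient r -> coefficient of determination r^2,
    # read as the share of variance in the outcome that the factor explains.
    return r * r

print(round(variance_explained(0.13), 4))  # 0.0169 -> humor explains under 2% of persuasiveness
print(variance_explained(0.5))             # 0.25   -> an r of 0.5 explains 25%
```

This is why a correlation that sounds respectable can correspond to an almost negligible practical effect.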



Then I scroll to the graph and see an inverted-U curve: a little humor is good, a lot of humor is bad, and very little humor is also bad. So: a little bit of humor, a little bit of effect — that is my answer. Very convenient; I got my answer pretty quickly.


Two years ago I took an online course at Carnegie Mellon University on evidence-based management and consulting. We were taught to estimate the likelihood that a study is correct, and there was a table: systematic reviews get a confidence level of A+, 95%, while qualitative research, at the very bottom, gets 55%.


So in general it is a good idea to assign ratings to your level of trust, and a meta-study earns a very, very high one.

Measure your level of confidence: “beyond reasonable doubt” is 95%, “reasonable suspicion” is 20%. Looking at a piece of research, just weigh it on these scales. We do not need absolute certainty; critical thinking means you are not 100% convinced of anything.


Jeff Bezos is the CEO of Amazon and one of the richest people in the world. Even when you run a large company, you do not make decisions only once you are 100% sure; 70% confidence is enough. It seems to me this is a very important trait in people: tolerance for ambiguity, the ability to decide with incomplete information.


All incoming information falls on one side of the scales or the other: it either confirms what we believe or refutes it. At some point the scales tip one way, at another point the other.

Let's play a little game to explore this process. Look: I have a female friend — guess her height.

You should name the most probable value. Perhaps you know (or can guess) that the average height of women in our country is 1.65 m. Your best bet is 1.65 m.

Now I give you a piece of information: she plays basketball. At this moment you should update your answer and say, “Ah, then she is probably taller than 1.65.”

We should do roughly the same with all evidence: as it arrives, we place it on the scales. This means we are not locked into binary yes/no positions; we are constantly in this process of Bayesian updating.
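The height-guessing game is a Bayesian update in miniature. Here is a sketch with entirely made-up numbers (the prior over height buckets and the likelihood of playing basketball in each bucket are assumptions for illustration only):

```python
# Discrete Bayes rule: P(height | basketball) is proportional to
# P(basketball | height) * P(height).
prior = {"<160": 0.2, "160-170": 0.5, ">170": 0.3}          # assumed prior over height
likelihood = {"<160": 0.05, "160-170": 0.10, ">170": 0.30}  # assumed P(plays basketball | height)

unnorm = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnorm.values())
posterior = {h: unnorm[h] / total for h in unnorm}

# The evidence shifts belief toward the tallest bucket:
assert posterior[">170"] > prior[">170"]
print(posterior)
```

The structure, not the numbers, is the point: new evidence reweights the options instead of flipping a binary switch.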

One more example:


Question: is this evidence that Sergei's financial intuition really exists?

And the answer, oddly enough, is yes. It is not very good evidence — not very strong — but we must take it and carefully place it on the scales. It is not that intuition definitely exists: there may be a million other reasons why Sergei guessed right; he may simply have been lucky. We need to look further — you cannot draw big conclusions from one episode. But it is some evidence, and we must take it into account. That is what statistical thinking is about.

Here are some important questions that, in my opinion, it makes sense to ask yourself when talking with people about scientific research and statistics.


I am not proposing a thorough understanding of statistics. I am an economist by training; I had two semesters of statistics and remember almost none of it — only how to calculate a correlation. Instead, I propose turning to reliable sources where there is oversight of all this.

And the first question, of course, is what counts as a reliable source.


Do not retell a retelling. If you see a research result in the popular press and want to pass it on, first find the original and see what is actually written there.


Journalists, unfortunately, often misinterpret. A journalist is also human: he may be mistaken, he has his own dogmas, and he confirms them. Look for the source; this is very, very important.

The second important issue is research sponsorship.


There is presentation software called Prezi. There is a study done at Harvard — a good place — which says Prezi is better, more convincing.


Above the article it says “Correction.” You look at this correction, and it turns out that the sponsor of the study was Prezi. The sponsor of the study proving that Prezi is good is Prezi.




Not surprising. This does not mean the study is complete garbage — you need to look. Harvard is still not bad, and one of the authors is Stephen Kosslyn, a very respected figure in neurophysiology whose hobby is studying presentations. So maybe it's true. I looked; Prezi probably is good to use over Skype.


Look for newer work. In the social sciences — social psychology especially — there has recently been a very large replication crisis, and even meta-analyses there cannot be fully trusted. You need to look for pre-registered multi-laboratory replications, where many labs simultaneously run the same experiment on a large sample and then publish their results independently. Such studies deserve higher confidence than a meta-analysis, simply because a meta-analysis only analyzes what has been published.

So we arrive at Albert Mehrabian and the claim that 93% of communication is non-verbal.


Google Albert Mehrabian — the numbers google well. You will find a very old study, done very poorly, though up to the standards of its time; social psychology has changed a lot since then.


A single study is very convenient, because you can say “scientists have proved it.” But look for meta-analyses that analyze an array of articles on a topic; look for systematic reviews; look through arrays of articles.


If a study relies on surveys, ask yourself whether the survey is reliable. Surveys are often unreliable, especially in marketing research.


Look for studies that measured people's behavior rather than what they say; behavior is more reliable.

There is a well-known anecdotal example: people are asked how many hours of TV they watch. They say 7 hours a week, while the Nielsen box, which records the viewing, shows 5 hours a day — can you imagine?

But neither figure is true, in fact. Nielsen only registers that the TV is turned on; it does not register whether anyone is actually watching. For most people the TV works like a radio: they come home, turn it on, and it runs in the background. And in polls they say they don't watch it. But that doesn't make the survey correct either. The truth is somewhere in between, and we must try to find it.


People often say “significant effect,” meaning statistically significant — which may in fact be a very small effect. If you see the number r, it needs to be squared. r is usually less than one, so squaring it makes it even smaller. The square is the percentage of explained variance. So, for example, 25% of work results are attributed to critical thinking: not 100%, not 98%, but also not 7%. How did we get 25%? We squared 0.5.


Please look for reliable research. More than 100 people in a sample is normal; 25 is bad. With small samples it is very easy to get random noise.
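The small-sample warning is easy to demonstrate by simulation. A rough sketch (the sample sizes are illustrative; both groups are drawn from the same population, so any gap between them is pure noise):

```python
import random

random.seed(0)

def mean_gap(n: int) -> float:
    # Two groups of n people drawn from the SAME population (mean 0, sd 1):
    # whatever difference appears between their sample means is random noise.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    return abs(sum(a) / n - sum(b) / n)

# Average the spurious gap over many repetitions for two sample sizes.
small = sum(mean_gap(25) for _ in range(1000)) / 1000
large = sum(mean_gap(400) for _ in range(1000)) / 1000
print(small, large)  # the n=25 gap is several times larger than the n=400 one
```

With 25 people per group, a sizable “effect” appears out of nothing far more easily than with hundreds.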

There is a simulator at the link where you can guess, from a number of parameters, whether a study replicated and how reliable it is.


For example, here is a study of whether people can guess a tennis player's emotions from his posture — from non-verbal communication — without seeing his face.

Here p is the probability of obtaining this result by chance: a very small 0.0000000015. The effect size is 0.961, a very large effect, but the group contained only 15 people. With 15 people, do you think this will replicate or not? Here it does replicate, because p is extraordinarily small and the effect is extraordinarily large. With a less extreme p and a smaller effect, things would be different.
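To see why such a tiny p is compatible with only 15 participants, one can roughly recompute it from the reported effect size. This is a sketch of my own using the standard Fisher z-transform (a normal approximation; the exact t-based p differs somewhat, but the order of magnitude is the point):

```python
# Sanity-checking the tennis-study numbers: with a correlation of 0.961
# and n = 15, how likely is such a result under pure chance?
import math

def p_two_sided(r, n):
    """Approximate two-sided p-value for a Pearson correlation (Fisher z)."""
    z = math.atanh(r) * math.sqrt(n - 3)  # z-score under the null of r = 0
    return math.erfc(z / math.sqrt(2))    # two-sided normal tail probability

print(p_two_sided(0.961, 15))  # on the order of 1e-11: tiny even with n = 15
print(p_two_sided(0.3, 15))    # ~0.28: the same n with a modest effect proves nothing
```

The lesson is the one from the text: a small sample can still be convincing, but only when the effect is enormous; with an ordinary-sized effect, 15 people tell you almost nothing.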


Evaluate the quality of the publication: the journal's reputation, its sponsorship, its impact factor. The impact factor measures how often articles from the journal are cited, and it is recalculated dynamically. An impact factor of 4 indicates a very good journal; 2.5 a good one. So, in summary: sponsorship, freshness of the research, meta-reviews, survey reliability, effect size, number of participants.


Errors still occur, so nothing should be believed absolutely; nothing is 100% reliable. Measure your level of confidence. I suggest borrowing a scale from American lawyers, which I use myself: how many percent sure are you?


Statistics is the best evidence; expert opinion is, in effect, compressed statistics; stories are isolated cases. For everything else, logic is fairly reliable, but of course armchair reasoning can also go badly wrong.

I can highly recommend this course. It is in English; there is no video, only text. It is very good, and I took it with great pleasure: Evidence-Based Practice in Management and Consulting. It may interest you if you work in business, as a manager or a consultant.

And finally, the last point: alternatives. We have already partially touched on this; it is the question "what else could explain it?" There is a statement, we see some evidence for it, and we must ask ourselves: are there other options? Let me give you an example. There is a question:


What do you think, is it healthy or not? Some people think it is healthy, others think it is harmful.

I recently saw the results of a study, a meta-review of good quality in my subjective opinion. Most studies, especially the high-quality ones, showed that those who avoided eating meat had significantly higher risks of depression, anxiety, and/or self-harm.

Translated into plain language: people who do not eat meat suffer from depression more often. The question arises: how much more often? I have forgotten the exact figure, but the effect is statistically significant and not very large, around 10-20%; still, 10-20% is quite a lot.

First-order critical thinking is when we tell ourselves: look, nutrition research in general is very unreliable, really very unreliable. If we generalize over unreliable studies, we get a not-very-reliable result. The reliability of a meta-analysis based on cohort studies is low, but it is still higher than the reliability of the opinion of an Instagram blogger. Granted, such evidence is imperfect, but it is better than the alternative. This is very important.

Second-order critical thinking is when you ask: or maybe people who are more prone to depression, who are more neurotic, who worry more about their health, are the ones who become vegetarians in the first place? Maybe they have a predisposition? It is important to consider this.


Third-order critical thinking is when you realize that both explanations can hold simultaneously. That is, it may be that neurotic people become vegetarians, and it may also be that meat itself somehow has an effect. Both can operate at once: 5% from one, 5% from the other, 10% in total. Why not? It is possible.


The last, and perhaps most important, point is the ability to separate facts from ethics: if meat is healthy, that does not mean eating meat is ethical, and vice versa. I can hold any opinion about the humanity or inhumanity of meat production. Is keeping pigs in narrow cages acceptable to me as a person? I can think whatever I like about that, but it says nothing about the quality of the research. It is quite possible that vegetarians really are more prone to depression; there is nothing to be done about that.

We cannot know everything, but it is entirely within our power to doubt and to google. I suspect you will only google when you need to defend your opinion. Before you go on stage, please google; it helps a lot. There is no need to trust anything 100%, but it makes sense to treat moderately reliable information accordingly.

I am becoming more and more convinced that a person who speaks on stage at great length and with great confidence simply does not understand what he is talking about. Categorical certainty is a sign of unprofessionalism.

Of course, it would be very good to seek not only confirmations but also refutations of your opinions. It would be good to understand not only how critical thinking is useful and attractive, but also how it can be harmful. Useful and harmful do not exclude each other; it can be both at the same time.


I urge you to develop critical thinking in yourself, but it only works if you surround yourself with people for whom critical thinking matters.


Develop critical thinking in your interlocutors. You will quarrel with someone along the way, and I am afraid that is inevitable, but in general the criticality of your thinking depends on your social environment. Work in teams: people in teams think more critically. That is the most basic thing I wanted to tell you.

