
[–]zyxzevn[S] 2 insightful - 1 fun -  (0 children)

Also very important:
Group-think

I tried to capture most of this with the logical fallacies.
But because it is so common, I think it is good to look at it separately.

From the article

These are the main behaviors to watch out for:

  1. Illusions of invulnerability lead members of the group to be overly optimistic and engage in risk-taking.

  2. Unquestioned beliefs lead members to ignore possible moral problems and ignore the consequences of individual and group actions.

  3. Rationalising prevents members from reconsidering their beliefs and causes them to ignore warning signs.

  4. Stereotyping leads members of the in-group to ignore or even demonise out-group members who may oppose or challenge the group’s ideas.

  5. Self-censorship causes people who might have doubts to hide their fears or misgivings.

  6. “Mindguards” act as self-appointed censors to hide problematic information from the group.

  7. Illusions of unanimity lead members to believe that everyone is in agreement and feels the same way.

  8. Direct pressure to conform is often placed on members who pose questions, and those who question the group are often seen as disloyal or traitorous.

  9. (addition) Tendency to swing from the status quo to the complete opposite.

  10. (addition) They become control freaks (and lack critical thinking).

[–]CompleteDoubterII 2 insightful - 1 fun -  (3 children)

Another thing I would recommend is posting your views on forums and asking people to debunk them or provide further evidence for them. You would probably find evidence for and against your view that you wouldn't have thought of on your own.

[–]zyxzevn[S] 2 insightful - 1 fun -  (2 children)

It will certainly work for small stuff.

But psychologically, it is very hard for people to change their views,
especially when fear or consequences are involved.
There are some good examples of that given by Scott Adams on his podcasts.
In the latest one he explains how people do not accept the evidence in the court documents, but rather go out protesting based on fake news.

So instead, I invite people to explain the situation in a way that the story contains almost no opinions: 1. no logical fallacies, 2. evidence based, etc.

This can improve the understanding of the full situation and the different ways to view it. So a person can say: "There is good evidence for this, but I believe that idea."

It may sound contradictory, but the person can then:
(1) improve the evidence for his idea,
or (2) accept the idea with the best evidence,
or (3) keep his idea, but understand that others may not support it,
or (4) look at another idea.

And at no point will the person feel attacked when people add more evidence or reasons to support a different idea.

I hope this improves the communication between people that have completely opposing viewpoints.
They can even support each other in building a consistent and valid theory around their ideas.

[–]CompleteDoubterII 2 insightful - 1 fun -  (1 child)

I was talking about investigating the truth with no stake in where the evidence lies. I think people would be able to change their minds based on the evidence in that scenario. That being said, you are completely right in saying that it is hard for people to change their views when they have a stake in the truth. This is especially apparent in the case of social conformity. I will assume one is aware of the Asch Conformity Experiment. Gregory Berns ran a variation of the experiments in which he measured the participants' brain activity during a task (rotating 3D objects). His results showed that the occipital–parietal network was active when participants answered incorrectly, meaning social conformity overrode their perception of reality. This is especially important since the results were physically verifiable. Imagine the implications for matters of opinion and claims that aren't physically verifiable (hat tip StormCloudsGathering).

Your idea of inviting people to explain the situation without any opinions may be a good way for people without strong stakes in what the truth is, but I doubt those with strong investment in a particular claim being true would be able to do that.

[–]zyxzevn[S] 2 insightful - 1 fun -  (0 children)

Great points..
Maybe I should add a psychology & bias section to the list.

but I doubt those with strong investment in a particular claim being true would be able to do that

You are correct on that.

I chat with astronomers about clear problems in their field.
One huge problem is "magnetic reconnection" which completely breaks with all basic physics and with observations. But there are many others.

But instead of seeing the problem as it is, they just refer to someone else.
The "expert" becomes the "holy prophet".
They just copy the belief that someone else had.
Even if it clearly gets falsified in an experiment.
And bad maths and oversimplification become a way to prove impossible things.

The idea of inviting people to investigate theories according to logic and basic science
is to see the quality of evidence for certain theories,
even if they prefer other theories.
It can give you a better understanding of how other people may think, and how well received your theory might be.

My experiences..

I learned a lot from Architects and Engineers for 9/11 truth.
In the beginning I completely believed the idea that the towers came down due to the airplanes.
But with basic physics and basic logic they make a very convincing case that the towers came down by demolition.
And while I did not immediately believe them, I could accept the quality of their theory.

So when something happens, I want to invite people to become an investigator.
Like a crime investigator.
And look for evidence and use logic like a crime investigator would do.
In this I am copying the method that the Architects and Engineers use.

Logical fallacies:

And the first thing on my list are the logical fallacies.
That is because the first news and first opinions that you find will contain a huge amount of logical fallacies.
You need to remove the logical fallacies before you can use any of that information.

But the logical fallacies can also show you toward which opinions you are being pushed.
This can reveal the underlying agenda or narrative that the news pushes.
Or even expose some propaganda.
And if it is too revealing, it may even be reverse psychology.
(The news makes you resist more, and it is meant to work that way).
In some cases there are planted or staged stories/events to make the narrative seem even more true.
Like the group of "Russian bots" that were from an agency paid by the Democrats,
to make people think that they were targeted by Russia.

[–]dawnbailey1974 2 insightful - 1 fun -  (0 children)

Thank you for sharing. It was interesting to read. I agree with your point of view, and I've also read a similar opinion at https://www.the-essays.com/ . Your article can be used as a guide for people working on the topic.

[–]zyxzevn[S] 2 insightful - 1 fun -  (0 children)

Additional problem related to "Trust"
Who do you trust, about what, and how much?
And who do you not trust, and who do you think may be manipulating the data?

In the discussion about climate change, a lot comes down to trust.

A fair way is to TRUST NO-ONE.

While you can listen to different sides of a discussion, you need to assume that both sides are making some over-simplifications and projections, and are stuck in group-think.

And the evidence that they bring forward needs to be verified via different ways.
Does the evidence still look consistent when combined with different sources?
Is there a manipulation of the data in some way?
With commercial or political projects this can be expected.

So more important than the data itself is:
Can we verify the data?

If it is about a scientific model we can have other criteria, like "prediction".
Do the predictions made by the scientists in the past match the reality that we see now?
And if they do not, we can simply conclude that the scientists were not correct then,
and are not correct now if they are still using the same model or way of thinking.

Other scientific criteria are: repeatability, consistency, transparency, etc.

But if we use these criteria we will see that many scientists are falling short or even failing.
And this is why they often defend their hypotheses/theories with a curtain of fallacies to make people believe in them.

And this is why I put the fallacies first:
They already indicate that something is not correct with the whole way of thinking.
But to find out the truth, we still need to investigate what logical or scientific problems they are trying to hide with the fallacies.

[–]zyxzevn[S] 1 insightful - 1 fun -  (1 child)

There is a lot to add about science, because there are also techniques to corrupt science, which are often used.

  1. Experts, who have big titles, but are actually doing the bidding of their sponsors. Or are just stating opinions instead of facts.

  2. Experts (2). When you are trained a lot with a hammer, everything becomes a nail.

  3. Experts (3). Experts in astrology claim that astrology is useful.

  4. Corporations use science publications as a way to advertise their product. And to hide defects. So they will write lots of reports that show how good it is. And lots of reports that carefully avoid the defects or problems. For example, Monsanto designed lab experiments to last only a short time, so that the cancer would not show up. Often you can see that certain products that would only require simple tests still take a long time to complete testing. That is because the tests are designed to avoid all the problems.

  5. Peer review. The peers are often connected to corporations. Or they are insiders, that do not want the field to change. Due to their extreme bias, they are also not able to see things in a different way. And this stops any real change in the field.

  6. Ghost writers. Often corporations write an article and use the name of an expert-scientist (often for money). So it seems that the article came from that scientist.

  7. Complaining about facts. When the facts are not in line with the corporation, they let scientists write (or sign) articles that are claiming that the facts are bad science. Like the hot and cold cycles of the climate.

  8. Diversion to false theories. So instead of dealing with the facts of the cycles in the climate, the scientists complain about people not listening to them. They even create false theories to claim that the people looking at the facts are crazy. Or that the cycles are due to some hidden ocean cycle, which is easily disproven.

  9. Diversion to hyped solutions. Instead of dealing with the science, the scientists pretend it is settled. So they refuse to talk about the facts. Instead they talk about hyped solutions, which usually would not even work if their fantasy were correct. So people have to pay energy-taxes, while special big corporations get a free pass.

  10. Diversion to futurism. Instead of dealing with the science of the real problem, we get solutions that we do not have the technology for. Like nuclear fusion. Or GMO-mosquitoes. Or a base on Mars. Or a vaccine/medicine that has never been shown to work yet. These diversions make us forget about the real problem that we are facing now.

  11. Patents. Patents and other forms of intellectual property are always a problem. But they are presented as a solution, because they create a monopoly for the corporation. And monopolies lead to maximum possible profits and extortion.

I will update this regularly.

[–]zyxzevn[S] 1 insightful - 1 fun -  (0 children)

More on corrupt science:

  1. People write in popular magazines instead of writing actual science. This way it prevents real experts from countering their claims.

  2. Cherry-pick a single event or single detail, and make it seem extremely important. Instead it should be weighed together with all other evidence.

  3. Reverse faking: "If I can fake it, it must be fake". Usually omits some details which are harder to fake, or circumstances in which this fake is very unlikely. In the same line: It must be a hoax.

  4. Not discussing/countering the Null-Hypothesis. "What if there was no ......?" Without a null-hypothesis, we are jumping to conclusions.

  5. It was "researched". And after investigation, it turns out no one was actually responsible for the research. Or only a single person. Everyone just assumed that good research was done, or good science was performed.

  6. Censorship of conflicting science. No science publication can pass peer review if it puts severe doubt on previously made conclusions.

  7. Preferable conclusion. Either due to politics, finance, prejudice, or even because "it looks nice", the scientists come to conclusions without any real scientific evidence. A lot in theoretical physics was accepted, because it was mathematically beautiful.

  8. Maths as evidence. Or a simulation as evidence. If you make a mathematical model and the data fits the model, does that make the model undeniably correct? No, of course not. Even if it is very precise, it might be correct only for this tested system and for certain circumstances. Sometimes a model has removed the influence of other factors (like noise), but due to the errors in the model the test seems perfect. In a simulation this can go even further, because a simulation has even more simplifications and systematic errors. Maths or simulations should not be considered as evidence without properly testing the alternatives and thoroughly investigating the limits of the model/simulation. (A small sketch below illustrates this.)
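A minimal sketch of that last point (my own toy example in Python, assuming numpy is available; it is not taken from the discussion above): a flexible model can fit the tested data almost perfectly and still predict nothing outside it.

    # Toy illustration: a model that fits the tested data very precisely
    # is not thereby proven correct. The "data" here is pure noise.
    import numpy as np

    rng = np.random.default_rng(0)

    x_train = np.linspace(0.0, 1.0, 8)
    y_train = rng.normal(0.0, 1.0, size=8)      # nothing real to model

    # A degree-7 polynomial has enough free parameters to pass through all 8 points.
    coeffs = np.polyfit(x_train, y_train, deg=7)
    fit_error = np.abs(np.polyval(coeffs, x_train) - y_train).max()

    # New data from the same noise-only process: the "precise" model fails.
    x_new = x_train + 0.05
    y_new = rng.normal(0.0, 1.0, size=8)
    pred_error = np.abs(np.polyval(coeffs, x_new) - y_new).max()

    print(f"error on the fitted data: {fit_error:.2e}")   # close to zero
    print(f"error on new data:        {pred_error:.2e}")  # much larger

The tiny fit error says more about the number of free parameters than about the correctness of the model.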

[–]zyxzevn[S] 1 insightful - 1 fun -  (0 children)

Astroturf and manipulation of media messages | Sharyl Attkisson

In this eye-opening talk, veteran investigative journalist Sharyl Attkisson shows how astroturf, or fake grassroots movements funded by political, corporate, or other special interests very effectively manipulate and distort media messages.

[–]zyxzevn[S] 1 insightful - 1 fun -  (0 children)

See also the information listed by
James Corbett - A message to new "Conspiracy Theorists"

[–]zyxzevn[S] 1 insightful - 1 fun -  (0 children)

Related: Models in Science - CCC lecture

Models, as well as the explanations and predictions they produce, are on everyone's minds these days, due to the climate crisis and the Corona pandemic. But how do these models work? How do they relate to experiments and data? Why and how can we trust them and what are their limitations? As part of the omega tau podcast, I have asked these questions of dozens of scientists and engineers. Using examples from medicine, meteorology and climate science, experimental physics and engineering, this talk explains important properties of scientific models, as well as approaches to assess their relevance, correctness and limitations.

For more than twelve years I have been interviewing scientists and engineers for my podcast omega tau. In many of the conversations, the pivotal importance of models for science and engineering becomes clear. Due to the pandemic and the climate crisis, the meaningfulness, correctness and reliability of models and their predictions is ever present in the media. And because most of us don't have a lot of experience with building and using models, all we can do is to "believe". This is unsatisfactory. I think that, in the same way as we must become media literate to cope with the flood of (fake) news, we must also acquire a certain degree of "model literacy": we should at least understand the basics how such models are developed, what they can do, and what their limitations are.

With this talk my goal is to teach a degree of model literacy. I discuss validity ranges, analytical versus numerical models, degrees of precision, parametric abstraction, hierarchical integration of models, prediction versus explanation, validation and testing of models, parameter space exploration and sensitivity analysis, backcasting, black swans as well as agents and emergent behavior. The examples are taken from meteorology and climate science, from epidemiology, particle physics, fusion research and socio-technical systems, but also from engineering sciences, for example the control of airplanes or the construction of cranes.

I am far less optimistic about the models than the speaker, but he gives a great overview.

I think that most models are oversimplified, and the errors in the models are then corrected with hidden parameters.
So when the models do not work, the parameters are adjusted to keep the model working. But now the model makes false predictions by default, only corrected by the parameters.
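A hedged toy sketch of what I mean (my own illustration with made-up formulas, not from the lecture): a structurally wrong model can always be kept "working" by re-tuning a fudge parameter on the data seen so far, yet its forward predictions keep failing.

    # Toy sketch: a structurally wrong model kept "working" by re-tuning a
    # hidden parameter after every new batch of data.
    import numpy as np

    def true_process(t):
        return 0.5 * t**2        # the real behaviour (made up) is quadratic

    def wrong_model(t, a):
        return a * t             # the model assumes linear growth

    a = 1.0
    for step in range(1, 6):
        t_seen = np.arange(1, step + 1)
        # Re-tune the parameter so the model matches all data seen so far...
        a = np.sum(true_process(t_seen) * t_seen) / np.sum(t_seen ** 2)
        # ...but the next prediction is still off, because the structure is wrong.
        t_next = step + 1
        print(f"step {step}: tuned a={a:.2f}, "
              f"predicted {wrong_model(t_next, a):.1f}, actual {true_process(t_next):.1f}")

Each re-tuning hides the structural error for the data that already exists, while every new prediction still comes out too low.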

Sometimes the models create the data themselves, hidden in slightly wrong maths or slightly wrong data.
Or they hide problematic data that should be visible, like side-effects of drugs.
Often this is combined with some p-hacking.
For example: I think such a combined process created the "black-hole image", but that is now so locked in group-think that it will be hard to change opinions about it.

I think that we need to create a better standard for the correctness of models. Not based on what we like to be correct, but based on what raw experimental data tells us.
Like: what does it mean when we get unexplained signals, or how much did our instruments and maths influence the parameters in the model.

Some standard questions:
How much noise did we accidentally convert to signal? Can we get a similar signal from a different source? Can someone have created the signal artificially? Is there a selection-process that can cause p-hacking?
Are there unexplained parts in the signal, or things that are hidden or noisy? What possible or weird errors/risks are involved? Can we test those errors/risks?

What are the edge-conditions of the model? How does the model stand up against a null-hypothesis? (the hypothesis that the model/idea is wrong). How does the model stand up against an evolving-hypothesis? (the hypothesis that the model needs to evolve further)

These questions can tell us a bit about how much we can trust a certain experiment and the tested model.
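The p-hacking question in particular is easy to demonstrate with a small hedged sketch (my own toy example; it assumes numpy and scipy are installed): if you run enough noise-only comparisons and only report the best one, a "significant" signal appears by selection alone.

    # Toy sketch: "significance" produced purely by selection.
    # The data is pure noise; there is no real effect anywhere.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    best_p = 1.0
    for trial in range(100):                      # 100 noise-only "experiments"
        group_a = rng.normal(0.0, 1.0, size=30)
        group_b = rng.normal(0.0, 1.0, size=30)   # same distribution: no effect
        _, p = stats.ttest_ind(group_a, group_b)
        best_p = min(best_p, p)

    # Reporting only the best trial makes noise look like signal.
    print(f"smallest p-value out of 100 noise-only tests: {best_p:.4f}")

The same selection effect appears whenever an analysis pipeline is allowed to pick which comparisons, cuts, or time windows end up in the final report.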

[–]StrategicTactic 1 insightful - 1 fun -  (2 children)

As you recently linked this, I am not considering this a necro.

There are many fallacies that should be avoided, and indeed the appeal to authority is one fallacy, but I do not believe you have given it a fair representation here. It is fair and valid to appeal to an authority, but it becomes a fallacy when that authority is not recognized by all. For instance, we can use the works of an author as an authoritative source on what that author means/intends, as we can all recognize that each person is the authority on themselves. We can often use some of your very examples, "Scientists say..." - but only when everyone involved recognizes that as a valid authority. When the authority is in question, it then becomes a fallacy to refer to it.

For example: If I were to refer to a specific religious text in a discussion, say the New King James Bible in a discussion with Christians, it would not necessarily be an authority fallacy. Now if I refer to that same text in a religious discussion with Buddhists, since they do not recognize the text as authoritative, it would be a fallacy.

Edit: One other important note is that science is never settled, and consensus is not science. Something like "The science shows/says" is only explaining some people's current understanding, not necessarily the truth of the matter or showing evidence.

[–]zyxzevn[S] 2 insightful - 1 fun -  (1 child)

Good points.

"Authorities that are recognized by all", can give a good starting point for a discussion.

If you substitute "authority that is recognized by all" for "truth", you kind of replace the authority fallacy with group bias.
So instead of the "truth" you get a "generally accepted truth". Which may still be completely wrong.
My experience is that many "experts" are also very biased. And this bias should be taken into consideration.
And indeed, science is never settled.

I usually use the same arguments for things that are clearly not universally accepted. That way a fallacy pops out quickly.
Like who is the authority on Christianity?
Is a pope the best authority for Christianity? Or historians? Or people that (think they) channel Jesus?
And does that even matter now?
Yet, in the middle ages you could get killed for having the wrong ideas.

I also notice that people misinterpret authorities to make completely weird claims.
For example, based on Einstein's relativity of gravity, gravity is equal to acceleration.
And therefore, some flat-earthers claim that the earth's surface must be accelerating.
Which they claim can only happen if it is flat.

But also good scientists misinterpret authorities to make weird claims.
Alfvén warned astronomers not to misuse his formulas when receiving his Nobel prize.
Astronomers falsely removed the electric fields from the equations, which made them much simpler.
And applied them in many cases.
This gave rise to the theory of "magnetic reconnection", where magnetic field lines
do exist and can even collide with each other. NASA still uses this theory for the Sun.
And yet magnetic fields are continuous and do not have lines.
There is more behind that idea, but it shows how correct ideas can become completely weird.

[–]StrategicTactic 2 insightful - 1 fun -  (0 children)

I agree with your points here. Just wanted to clarify "recognized by all" was meaning "all of those in the conversation/debate", since as you indicated a general "all" would have group bias. Since those looking at a paper may not all recognize certain authorities, there can be some challenges there.

In general I find debating a paper or specific evidence to be rather exhausting just for those reasons, and much prefer debating a specific person on their viewpoint. Even if I do not convince them on a subject, they can pose direct challenges where they do not recognize figures that I do, and the reverse. A perfect example is in political debate where people quote Fox News and CNN to each other, neither recognizing that they have both committed the appeal to authority fallacy.

[–]zyxzevn[S] 1 insightful - 1 fun -  (0 children)

David Martin with his own scheme: Video - Activate Humanity Rise

Reconnaissance

  • Surveillance of decoys and distractions and their promoters

  • Recognition of patterns of human control

  • Inner training on using the "fear" dashboard indicator

Intelligence

  • What is source data?

  • How do I track down the signal in the noise?

  • What is "real" and what is a manufactured illusion?

Synthesis

  • "Trust and Verify"

  • Perform an integral Audit of the Narrative (all actors and all funders)

  • Understand the placement of a fulcrum to achieve benefit

Engagement

  • Take action - Avoid reaction

  • Conscript the resources for the campaign

  • Remaining fit for service in personal and relationship life

[–]zyxzevn[S] 1 insightful - 1 fun -  (0 children)

Scientists are also very capable of producing falsehoods.
Falsehoods can hide in statistics, in methods, in systematic noise, and in survivor bias.

Even without fraud many scientists will often overlook:

  • the problems with their statistics,

  • the methods that they use. Like the instruments that they use. Or the calculations that they use.

  • the systematic noise caused by the way they measure.
    Or noise caused by the environment. Or noise caused by the way and timing of measurement.

  • Survivor bias. A lot of data and problems with the data will not reach the researcher. Or they will scrap it, because the data is not good enough. The latter happens more often than you think, because every person in the research chain can remove data: from the student, laboratory worker, local scientist, project scientist, and project manager to the director. Each person will add their bias to an observation, and avoid problematic data. (A small sketch after this list illustrates the effect.)

  • Using words as the explanation instead of the description of an unexplained problem.
    In the Middle Ages it would be the devil, or a miracle. But we do not have a good explanation of what causes gravity (in my opinion).
    We have good formulas, but what is causing gravity is often just the word "gravity".
    Or the more mystical: "the bending of space/time".
    But what would be wrong with stating: "Tendency towards depression is caused by the placebo effect"?
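The survivor-bias point is easy to simulate. A minimal sketch (my own made-up chain and numbers, not real data): if every person in the chain quietly drops the measurements that "look wrong", the surviving data drifts away from the true value without anyone committing outright fraud.

    # Toy simulation: each stage of the chain drops the lowest 10% of what it
    # receives because it "looks wrong". The true mean of the process is 0.0.
    import numpy as np

    rng = np.random.default_rng(2)

    data = rng.normal(0.0, 1.0, size=10_000)      # honest raw measurements

    for stage in ["student", "lab worker", "scientist", "manager"]:
        cutoff = np.quantile(data, 0.10)
        data = data[data > cutoff]
        print(f"after {stage:<10}: n={data.size:5d}, mean={data.mean():+.3f}")

    # The surviving data reports a clearly positive mean, even though the
    # underlying process is centered on zero.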

Fraud

With fraud, this picture gets a lot worse. One or all people in the research team will now deliberately work towards a certain outcome.
This is often standard procedure for certain political or commercial research, where the outcome determines the distribution of a lot of money. Many people think that it is OK to bend the rules a little if it is beneficial. The amount by which they bend the rules depends on the rewards. If you do not know that this is standard procedure, you should start investigating it better.
Often some scientists will make conflicting statements, because the conclusions are in conflict with the real world.

The goal of scientific fraud is to create a certain outcome out of the data, while making it seem that the outcome is scientifically valid.
So they work towards the outcome that they want.

  1. Statistics. Lying with statistics is easy. (A small sketch after this list shows the effect.)
    In the medical industry the most common method is to reassign problematic cases to other categories. And then make those categories seem irrelevant.
    Like: There were 3 deaths during the testing of cocaine, but these people were later diagnosed as addicts. So we are certain that the deaths were caused by the addiction.
    Or: During the investigation of the death we saw that the bullets had not caused any lead poisoning. So we can not conclude that the death was caused by the bullets.

  2. Methods and calculations.
    John measured the hammer to see if the safe was strong enough. And noticed no problems. Our new car model reduces the number of deaths by 100% (of 1 person), as we had one less fatal accident than the older model.

  3. Systematic noise / bias.
    Our product does not cause cancer... because we only measured the first week after consumption.
    Our test among students showed that most people who used our medicine are very healthy.
    Lying down while shooting in random directions is safe. So this is what people should do during shootings.
    Crossing the street is dangerous, we should forbid people from crossing the street.
    The satellite shows that earth is getting bigger. Because the distance between earth's surface and the satellite is getting smaller.

  4. Survivor bias.
    5 of 6 scientists confirm that Russian Roulette is safe and profitable.
    After punishing doctors for reporting problems, the safety of our medicine has improved immensely.

  5. Reverse logic.
    Drinking alcohol causes depression.

  6. Changing or misrepresenting history.
    Everyone who has no access to modern medicine, is no longer alive.
    We remove the measured historical data and replace it with our models.
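As a hedged illustration of point 1 (my own made-up numbers, not real data): reassigning "problematic" cases to another category and dropping them can make an unsafe-looking result seem safe.

    # Toy sketch with made-up numbers: reassigning problematic cases to another
    # category changes the reported rate.
    treated = 100          # hypothetical trial size
    adverse_events = 8     # hypothetical adverse events

    rate_honest = adverse_events / treated

    # Fraud pattern: 6 of the 8 cases are re-labelled afterwards
    # ("pre-existing condition", "not related to the product") and dropped.
    reclassified = 6
    rate_reported = (adverse_events - reclassified) / (treated - reclassified)

    print(f"honest adverse-event rate: {rate_honest:.1%}")                         # 8.0%
    print(f"reported rate after dropping 'unrelated' cases: {rate_reported:.1%}")  # about 2.1%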

[–]zyxzevn[S] 1 insightful - 1 fun -  (0 children)

Survival Heuristics: Avoiding Intelligence Traps
Also applies to science and news
Video: A CIA analyst who has long experience of solving problems.

These are the topics that she covers:

  • The streetlight effect
    We can only see the things that are visible.

  • Trends are always about the past

  • Graffiti
    Signals out of the ordinary give us an indication of hidden problems

  • Most things don’t happen by chance

  • Exponential causality
    Information spreads exponentially from one person to many.
    This causes information and responses to spread exponentially.

  • Worst case doesn’t mean it’s unlikely

  • The probability of something occurring is independent of the consequences that the event may have, right?

  • Don't drone on with technical explanations and facts, use compelling stories and examples to make the point you want to make.
    People generally do not understand logic and facts.

  • They never run out of bullets
    You can't expect the criminals to stop by themselves.

  • Emotions can kill
    Example: News with lots of emotional triggering can cause violence.

  • Construct and constantly revise your analytic landscapes
    An analysis is only a temporary model to make the ever-changing reality easier to understand.
    In science, new experiments can also break older ways of seeing things.

  • Know Your Thinking style

  • Cause-effect relationship: – Concrete Sequential (CS) – Abstract Random (AR) – Abstract Sequential (AS) – Concrete Random (CR)
    Cause and effect relationships can be simple or extremely complex. These differences require different analyses.
    Most people only understand simple relationships, which can be shown in simple graphs.
    Experts can work with somewhat more complicated relationships, but fail when they get too complex.
    Many cause-effect relationships are hidden or may be random, and extremely hard to analyze.

  • Deploy Diversity of Thought
    Organizations that allow for a lot of different ideas have better outcomes, even when the dissenters are wrong. Diversity of thought reduces the holes in the knowledge and in the logic, and reduces the bias.

  • Think Together from the Start

  • Respect Your Intuition
    Your intuition can see far more than your logical mind.