I know MediaBiasFactCheck is not the be-all and end-all of truth/bias in media, but I find it to be a useful resource.

It makes sense to downvote it in posts that have great discussion – let the content rise up so people can have discussions with humans, sure.

But sometimes I see it getting downvoted when it’s the only comment there. That does nothing, unless a reader has rules that automatically hide downvoted comments (and even then they could just expand the comment anyway, so really no difference).

What’s the point of downvoting? My only guess is that there are people who are salty about something it said about a source they like. Yet I don’t see anyone providing an alternative to MediaBiasFactCheck…

  • FuglyDuck@lemmy.world · 4 months ago

    Here’s their definition of what’s a left or right bias.

    It’s pretty fucking arbitrary.

    Additionally, their methodology is a bunch of gibberish and buzzwords, and the justifications they give on each article are inadequate. For example, Al Jazeera is dinged for using “negative emotion” words like “deadly”.

    “Deadly” might evoke a certain kind of emotion, but it’s also the simplest way to describe an attack in which someone dies. Literally every news service will use “deadly attack” if people are dying, regardless of whether it’s an attack by terrorists or by cackling baboons (or indeed not even an attack, for example a “deadly wildfire” or “deadly hurricane”). Pulling that out as an example is extremely arbitrary, applied case by case.

    • finley@lemm.ee · 4 months ago

      Now you’re just repeating yourself. That doesn’t make it any more true.

      And as for your claim that their methodology is arbitrary: just because you use words in an arbitrary manner does not make their methodology arbitrary.

      Like I said, just because you don’t agree with them doesn’t make them wrong or you right. Feel free to block them if you don’t like it. But other users here have clearly demonstrated how your argument does not hold water.

      • FuglyDuck@lemmy.world · 4 months ago

        Okay.

        Take their methodology.

        Work through it.

        You can’t, because most of the “rigorous definitions” aren’t shared.

        You still haven’t explained what “factually consistent” means in a method that can be repeated and applied consistently.

        Their methodology as posted is far too vague to adequately consider their ability to provide consistent neutral ratings.

        How are “loaded” words evaluated? Is there a table of words that are considered “loaded”? Personal feeling? We don’t know. We know what some of them are, since they’re mentioned on specific articles.

        But that isn’t a consistent or “rigorously defined” criterion. So what are the “rigorously defined criteria”, and why aren’t they published?

        Do you not see how that’s ripe for abuse?

        • finley@lemm.ee · 4 months ago

          I have used their methodology and worked through it. I find no fault with it.

          And finally, you’re the one making claims that there is some problem with their methodology, yet you have not demonstrated that at all. All you have demonstrated is that you happen to disagree with it and that you don’t like it. If you wish to prove your point, you’re gonna need evidence, and in all of your carrying on here I have not seen a shred of it.

          Just block it and move on already. Your disagreement is hardly worth this crusade.

          • FuglyDuck@lemmy.world · 4 months ago

            I have used their methodology and worked through it. I find no fault with it.

            So then, it should be simple for you to tell us what rate of error is acceptable to still qualify as “factually consistent”.

            This is like giving a recipe without measures, or a “how to build a shed” guide that never describes how to build the pad it sits on.

            And finally, you’re the one who makes claims that there is some problem with their methodology, yet you have not demonstrated that at all.

            I haven’t? Huh. Interesting. So all those “rigorously defined criteria”, those are public? We know how they’re actually evaluated?

            We know what error rate counts as “factually consistent”? We know how they treat “misleading” or “misrepresentation” tags in their factual rating?

            I mean, when I went looking for an example where they clearly do not have a consistent methodology, I found one in the first place I looked. (Okay, so I already knew VOA News and Al Jazeera are both state-owned propaganda outlets.)

            They’re both inherently biased. Yet one is “least biased” just because its owner happens to be the US government? Oh look. Here’s a third government-owned propaganda outlet. Gee, what makes VOA special?

            Just block it and move on already. Your disagreement is hardly worth this crusade.

            No, but the open discourse here and in similar communities is. Blocking it just hides it from me. MBFC is being used, in part, to evaluate sources for articles.

            It’s a third-party, private-interest group whose methods aren’t clear and are, self-evidently, inconsistently applied.

            Even if they were demonstrably always right… that’s a problem, because sometimes the best source/news agency to talk about a given issue sucks.

            Sometimes the discussion is about awareness of how shitty “that rag” is.

            • finley@lemm.ee · 4 months ago

              I’m not here to waste time trying to convince you of something about which you’ve clearly made up your mind, since others have shared plenty of facts, made great arguments, and all you do is keep shifting the goalposts.

              Not to mention: it’s not for me to prove your claims; that’s on you, and you haven’t. All I have claimed is that I’m satisfied, and the only proof you need of that is my word on the matter.

              So, once again, since you haven’t proven anything other than that you disagree with it, I suggest you simply block it and move on with your life. You have no greater authority to decide what is or is not a “reliable source” than MBFC does, but at least they show their work.

              • FuglyDuck@lemmy.world · 4 months ago

                Since others have shared plenty of facts, made great arguments, and all you do is keep shifting the goalposts.

                I shift the goalposts, but I’m also just repeating myself? Interesting.

                In any case… as for my “claims”, perhaps I’ve missed something. Again, from their own methodology page:

                The primary aim of our methodology is to systematically evaluate the ideological leanings and factual accuracy of media and information outlets. This is achieved through a multi-faceted approach that incorporates both quantitative metrics and qualitative assessments in accordance with our rigorously defined criteria.

                Okay, so that’s the high-level sales pitch. Emphasis mine.

                Perhaps, just perhaps, I’ve missed where they spell out what those defined criteria are. Let’s keep reading.

                While the concept of bias is inherently subjective and lacks a universally accepted scientific formula, our methodology employs a series of objective indicators to approximate it. We utilize a visual representation—a yellow dot on a scale—to signify the extent of bias for each evaluated source. This scale is accompanied by a “Detailed Report” section which elaborates on the source’s characteristics and the basis for its bias rating.

                Our bias assessment encompasses various dimensions, including political orientation, factual integrity, and the utilization of credible, verifiable sources. It’s crucial to note that our bias scale is calibrated to the political spectrum of the United States, which may not align with the political landscapes of other nations.

                Objective indicators? What indicators? Where? For you or me to understand how they’re arriving at their analysis, I need to understand what “objective indicators” they’re using, and they’re not listed anywhere I can find. Perhaps I’ve missed it. I don’t think I have. But perhaps I have.

                Now, skipping down to the specific categories…

                The categories are as follows:

                • Biased Wording/Headlines- Does the source use loaded words to convey emotion to sway the reader. Do headlines match the story?
                • Factual/Sourcing- Does the source report factually and back up claims with well-sourced evidence.
                • Story Choices: Does the source report news from both sides, or do they only publish one side.
                • Political Affiliation: How strongly does the source endorse a particular political ideology? Who do the owners support or donate to?

                Alright, now we’re getting to the stuff I’m asking for! Maybe. Uh. Shit. And that’s just looking at “Biased Wording/Headlines”. They have no list of common loaded words. (For example, is “deadly wildfire” okay but “deadly attack” not? Both describe events in which people presumably died.) What you, I, or anyone else perceives as “loaded” is going to be entirely different. You want to rigorously define criteria for bias? You’re gonna have to at least provide examples, and not just on the individual ratings. Protip: the lack of strong or emotional language is also an indication of bias. For examples of that, watch the reporting around any cop who killed a subject; you’re almost certainly going to see the pro-cop news agencies shy away from language that evokes anger.
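                Just to illustrate what an applicable criterion would even look like, here’s a toy sketch in Python. The word list and the threshold are completely made up by me for illustration; MBFC publishes nothing of the sort, which is exactly the problem:

                  # Hypothetical only: an auditable "loaded wording" check needs a published
                  # word list and a published threshold. Both are invented here.
                  LOADED_WORDS = {"slams", "destroys", "shocking", "outrageous"}  # made-up list
                  THRESHOLD = 0.2  # made up: flag a source if >20% of sampled headlines match

                  def loaded_headline_rate(headlines: list[str]) -> float:
                      """Fraction of headlines containing at least one listed 'loaded' word."""
                      hits = sum(
                          any(word in headline.lower().split() for word in LOADED_WORDS)
                          for headline in headlines
                      )
                      return hits / len(headlines) if headlines else 0.0

                  headlines = ["Senator slams new budget deal", "City council approves budget deal"]
                  rate = loaded_headline_rate(headlines)
                  print(f"loaded-headline rate: {rate:.0%}, flagged: {rate > THRESHOLD}")

                Agree or disagree with the list, at least you could check their work. That’s what “rigorously defined” should mean.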

                Then they get into their “comprehensive” analysis:

                For a thorough evaluation, we review a minimum of 10 headlines and 5 news stories from each source. Our methodology employs a variety of search techniques to ensure a comprehensive understanding of the source’s political affiliation and ideological leanings. This process can be time-consuming or very simple, depending on the source.

                Yeah. Uhm. That’s not “comprehensive”. At all. Take MPR: just from today, just counting the articles that get highlighted, Minnesota Public Radio News has 28 articles. From today. And that’s not even getting into the massive number of MPR/NPR-affiliated podcasts and such being pumped out, sometimes three times a day.

                Further, there’s no information on which articles are selected, which can have a profound impact on whether or not a source gets a passing grade for factualness. If you’re only checking ten, or even a hundred, out of literally thousands of articles a year, how you select the articles to review is going to matter enormously. Is it random? Is it by top rating? Are they cherry-picked? Top headlines from random dates?

                And let’s draw attention to that last line: “This process can be time-consuming or very simple, depending on the source.” Meaning… it varies based on the source. Even if there’s more to work with for a given source, the process should not be any more or less simple. The process should be the process; that’s the purpose of a methodology.

                Skipping the descriptions of their fact-check ratings… all I’m going to say here is that there’s no objective standard for what “consistent” or “often” means, or for what miss rate on being factual is acceptable. I will submit that VOA News, for example, probably should be given a low factual score based on this statement:

                A “Low” rating indicates the source is often unreliable and should be fact-checked for fake news, conspiracy theories, and propaganda.

                You know, considering VOA is literally a state media outlet whose entire purpose is to pump out propaganda; yet it’s given a “high” rating. But what do I know, they certainly weren’t forbidden from broadcasting inside US borders because of their propagandist nature.

                Their criteria for which fact-check services they use are useful:

                Our methodology incorporates findings from credible fact-checkers who are affiliated with the International Fact-Checking Network (IFCN). Only fact checks from the last five years are considered, and any corrected fact checks do not negatively impact the source’s rating.

                IFCN is good. The date restriction is good. Explaining how corrected fact checks affect things… is good. I would like to see a comment about which fact-checkers they always use, or always use when relevant (for example, reviewing a French news service using, I dunno, a Taiwanese fact-checker seems kinda sketchy). Do they search all 115 current signatories and the other 54 that are in the renewal process? Do they search only those from the source’s home country? When do they elect to expand beyond that? Do they only use one service at all?

                I’d assume they use some sort of aggregator service to look for fact checks across all of them at once. Personally, my preferred approach would be an aggregation service combining all of them, searching for articles tagged as fact-checking the specific source rather than searching on each individual article being reviewed, and then organizing those by some sort of pass / mostly-pass / fail / epically-fail metric. But that’s just me.
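                Something like this, purely as a sketch of what I mean. The fact-checker names, verdict labels, and cutoffs below are all invented by me; I have no idea what MBFC actually does internally, which is, again, the point:

                  # Sketch of the aggregation idea above. Names, labels, and thresholds are
                  # invented for illustration and do not reflect MBFC's (unpublished) process.

                  # Each entry: (fact_checker, verdict) for checks tagged against one source
                  # over the last five years, pulled from a hypothetical IFCN-signatory aggregator.
                  fact_checks = [
                      ("CheckerA", "false"),
                      ("CheckerB", "mostly true"),
                      ("CheckerC", "true"),
                      ("CheckerA", "true"),
                  ]

                  FAILING_VERDICTS = {"false", "misleading", "pants on fire"}

                  def rate_source(checks):
                      """Bucket a source by its failed-fact-check rate, with explicit cutoffs."""
                      if not checks:
                          return "no data"
                      fail_rate = sum(1 for _, v in checks if v in FAILING_VERDICTS) / len(checks)
                      if fail_rate <= 0.05:
                          return "pass"
                      if fail_rate <= 0.15:
                          return "mostly pass"
                      if fail_rate <= 0.40:
                          return "fail"
                      return "epically fail"

                  print(rate_source(fact_checks))  # -> "fail" (1 of 4 failed under these made-up cutoffs)

                The exact cutoffs matter less than the fact that they’d be written down where anyone could argue with them.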

                TL;DR? My goalpost has always been that their methodology is opaque and not sufficient to determine that their method reasonably eliminates their own bias. That has never changed. They don’t describe what acceptable error rates for factualness are (never mind the severity of the error: reporting that a person wore a green shirt when they wore a blue one might be factually incorrect, but does it really matter if the story isn’t about what shirt they wore?). They don’t describe, even in brief detail, what “loaded” or “biased” headlines actually look like. And they describe a literal propaganda service as “Least Biased”.

                They cite NewsGuard as a competitor (I’m not sure about that, but they’re in the same space; from what I see on their website, they’re selling their service to different audiences, like brands looking to advertise on a specific site, etc.). Let’s look at their methodology page. I’m not going to go into detail, but do you see how it’s broken down? How specific it is? Each criterion is specifically listed, with the reasons for a source passing or failing it, as well as express explanations of what things mean as you’re looking through it. Not “we judge on bias… which means that we look for biased words…”. For example, one phrase you see there is “that a regular user would not likely see it on a daily basis”.

                Check their scoring process. They have a researcher (described as a trained journalist) research the website, make a report, and then write the article. That article is then put on pause for comment from the company in question… then it is reviewed by people (“at least one senior editor and Co-CEO”…) to check for factual accuracy and what have you. Only then is it published. I assume that MBFC has something similar, but that’s an assumption; nowhere does it describe the editorial process. For all we know, it really is just one guy in a cat suit working on the one article, doing it his way, while the lady in the dog suit is doing it her way and the editorial staff are in a two-person horse suit searching for organic oats. I’d rather assume not, but again, that is an assumption on my part.

                • finley@lemm.ee · 4 months ago

                  With all due respect: I’m not reading that.

                  Ya know, I’ve had some great interactions with you here in the past, and generally we’re on the same page, but on this, we disagree. And I doubt we’re going to change each other’s minds, so I’m not really going to waste any more time on this discussion with you.

                  And, I know this is me repeating myself, but I again suggest that you just block the bot and move on. It’s not worth the energy you’re putting into it over a disagreement.

                  Peace, buddy

          • Hegar@fedia.io · 4 months ago

            Just block it and move on already. Your disagreement is hardly worth this crusade.

            That’s not sufficient.

            A private trust-assessment company shouldn’t be given free space in an open public forum as though its assessments were something the general public should be aware of. If you trust it, you can go seek its assessments off-site. But this company shouldn’t be allowed to spam the fediverse, of all places.

            • finley@lemm.ee · 4 months ago

              By that logic, no privately owned media company would be able to post links here at all, because your description pretty much fits all of them too, from the AP to CNN to Fox News.

              And why should you get to set the standards for what everyone else sees? If that’s what you want, start your own instance and ban this bot. But this bot was put in place by the instance admins, and they get to do what they want on their own server. You not liking it or happening to disagree with it gives you no right to tell them what to do.