"Facebook's Mark Zuckerberg comes under fire as he tells Republican senators his platform 'throttled' users trying to post Hunter Biden revelations be

My take-away from watching the hearing:
  • It would help if everyone shared a common definition on key terms, especially the word “censorship” or the phrase “election interference.”
  • For clearer questions, don’t attach a lot of unnecessary statements of opinion to a question. Just ask the question.
  • Live discussion formats don’t lend themselves well to surfacing the specific details behind any one arbitrary user-moderation decision.
I don’t think illuminating discussion was the objective of all participants involved.
A lot of the discussion looked like it was for generating sound bites. But that is not unusual as such discussions go.
 
I agree. There is a lot said and no action taken.
Action is subordinate to a clear problem definition and some way of determining whether action is improving or degrading the current status. As things stand now, there is party-line disagreement on whether current moderation is insufficient or heavy-handed. With such incompatible stances, any action is almost guaranteed to be unacceptable to a significant portion of Congress.

I think the members of Congress also largely don’t have an understanding of AI, ML, and some of the unpredictable or non-explainable behaviours these systems exhibit. As Jack mentioned, Explainable AI (XAI) is still a field of research. Oftentimes the developers of an AI-based solution are not themselves able to predict how it will act in certain scenarios.

It would be great if the alleged problem could be defined in terms of computational constraints. Once there is agreement on that, there is a clearer pathway to a solution and to judging whether actions are improving or degrading things.
 
It would be great if the alleged problem could be defined in terms of computational constraints. Once there is agreement on that, there is a clearer pathway to a solution and to judging whether actions are improving or degrading things.
I think that I agree, but your use of language is above my pay grade.

The reason I see Congress not agreeing is political. Big Tech is in step with one side and not the other. So, as long as Big Tech has the support of one side, there will never be an agreement.

However, we are allowing multimillionaires to decide what the poor peasant can see and talk about. If they can, and do, block content from other billionaires, how much more can they do to the peasant?
 
I think that I agree, but your use of language is above my pay grade.
It just means that it would be good if a problem could be described in terms of math and logic. Ultimately, tasks performed by typical computers are done using logic and math operations. Anything that someone wants a computer to do must be reduced to these.

A simple example: if there is a list of prohibited words, the logic for moderating text content could be as simple as “flag the message if a word from this list occurs within it.” A person examining a moderation action based on this logic could check it directly and be certain whether or not the rule applied. There are no personal value judgements or opinions involved. Both a person and a machine can perform the evaluation and agree.
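To make that concrete, here is a minimal sketch of such a deterministic rule in Python. The word list and function name are illustrative, not anything a real platform uses, and production systems normalize text far more aggressively (case, punctuation, obfuscated spellings).

Code:
# Minimal sketch of a deterministic word-list rule. The word list is
# hypothetical; real systems normalize text much more aggressively.
PROHIBITED_WORDS = {"badword1", "badword2"}

def violates_word_list(message: str) -> bool:
    """Return True if any prohibited word occurs in the message."""
    tokens = (word.strip(".,!?") for word in message.lower().split())
    return any(token in PROHIBITED_WORDS for token in tokens)

# Any evaluator, human or machine, gets the same answer for the same input.
print(violates_word_list("This contains badword1."))  # True
print(violates_word_list("This is fine."))            # False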

Unfortunately, language is highly contextual, and there are a number of rules that can’t be computationally defined. Ex: many social networks prohibit threatening to harm another person. What calculations and logic can be used to describe that? One could look for the presence of certain words, but that would capture a lot of things that are not threats. There is a well-known case of a Japanese man who was locked out of Twitter for tweeting about a mosquito that he was going to end. Who is tweeting also matters. There are instances of people who have had real-life threats made against them, posted the threats they received, and triggered automated moderation.

It is also possible to make a threat without using threatening words. When parties already have a contentious relationship, an otherwise innocuous message can carry additional meaning. Imagine if Kathy Griffin sent Trump the tweet “hold your head up.” Some might think this wording is encouraging. Others, who know of her photoshoot holding a fake bloody head of Trump, may see it differently. If her tweet were moderated, some might think it appropriate; others might think she was punished for sending an encouraging message.
However, we are allowing multimillionaires to decide what the poor peasant can see and talk about.
Generally speaking, people can talk about anything; these social media networks are not strictly necessary for that. But hosting content on the Internet costs money. On “free” sites the advertisers pay that money, and some have made it clear they don’t want their ads paying for certain content or being displayed next to it. Those hosting the ads and the messages have generally responded in the interest of those who pay the bills.

If someone is willing to pay for hosting themselves they will have more freedom in what they can post.
 
The reason I see Congress not agreeing is political.
I get the impression there is asymmetry in the self-reporting of moderation. Ex: when YouTube changed the eligibility for ad revenue, some creators joined together to create their own streaming site, some moved to other services, and some claimed it was a politically motivated attack. The affected content could be classified as more right-leaning or more left-leaning, but there tends to be more coverage of the affected users who are more right-leaning. A few more left-leaning creators have gotten some publicity, such as the LGBTQ creators who said that the automated moderation had an anti-LGBTQ bias (still being fought in court). The wider range of those affected is rarely, if ever, discussed.
 
It just means that it would be good if a problem could be described in terms of math and logic. Ultimately, tasks performed through typical computers are done using logic and math operations. Anything that someone wants a computer to do must be reduced to these.
That is true, if people were not involved. However, math and logic do not take into account the people in charge and their heavy hand in the system. Thus math and logic cannot be used if people are the ones making the decisions.

Many whistleblowers have come forward and shown evidence of suppression of political and other speech based on ideology. That is where your reasoning is flawed.

Sure, people can say or do what they want outside of these platforms. But that is not the argument. The argument is free speech within the platforms, and biased censoring.

I ask you this. What do you say about all the whistleblowers who have come forward and shown proof of bias?
 
That is true, if people were not involved.
It’s reported that the overwhelming majority of moderation on these sites is performed by automation.
Many whistleblowers have come forward and shown evidence of suppression of political and other speech based on ideology. That is where your reasoning is flawed.
Someone can show that they have received moderation, but a person might not be able to tell whether that moderation was performed by a human, or whether it was done for political motivations versus some other value judgment.
I ask you this. What do you say about all the whistleblowers who have come forward and shown proof of bias?
I’ll have to ask you to be more specific. The cases I know of may not be the cases you are thinking about, and we might not have the same perspective on a given moderation decision. Also, I haven’t seen “proof” (in the sense of evidence that compels a conclusion). I know of incidents that people have claimed are illustrations of political bias.

I should point out that I am a Software Engineer, and some of my perceptions of the automated moderation are colored by experience using AI for classification.

Ex: Twitter has a rule that you cannot “promote” (as in pay money to purchase higher visibility) a tweet about abortion. It doesn’t matter if the tweet is in favour of abortion or against it. One can tweet on the topic, but Twitter won’t promote it. Despite this rule applying regardless of one’s position on the issue, there have been complaints, largely from people against abortion, that this rule is biased and all decisions based on it are biased.

I know of an instance of a woman named Joyce who made a video against abortion and uploaded it to YouTube. YouTube has a rule against bots inflating views on a video. Someone, stating that he was trying to help her, made a bot to increase the views of her video. It resulted in YouTube’s bot detection temporarily removing the video. Joyce thought this was political bias. In this case, I thought this was one of those moderation decisions that actually is computationally bound and not based on an application of political perspective.

Someone made an anti-abortion advertisement and tried promoting it on YouTube. YouTube has a policy against ads showing certain types of graphic material, and the advertisement contained images of dead bodies from a war zone. YouTube removed the advertisement. It was claimed this was political bias; YouTube says it was because of the dead bodies.

I could talk about the many court cases too, with PragerU v. YouTube being the most recent, with a 2020 court opinion.
 
It’s reported that the overwhelming majority of moderation on these sites is performed by automation.
Okay. Let me put this in 3rd-grade-level language. This is where I can have a conversation.

Let’s say that Google, Facebook, Twitter, plus others, decide that the statement “Jesus is Lord” is not allowed on their platform. By automation, anyone who says that Jesus is Lord, regardless of intent, will be blocked.

By deeming “Jesus is Lord” offensive, whom are they discriminating against? Democrats? Hindus? Latinos? The answer is obvious: they are discriminating against Christians.

Those inputs are placed in the computer by people. The AI does not know what is offensive or not. People have to tell the AI what to look for.

Then we have things like the New York Post story that was blocked from the platforms. That was not the AI. That was people.

These companies have a right in the USA to do as they please. AND people, as customers, have a right to tell the managers that they do not like what is happening. If the managers do not act, then we involve the government and show how people’s rights are being suppressed.

This is what it is all about. However, due to the political divide in our country, Democrats do not care, because they are suppressing Republicans. Mind you, I am talking about our political leaders, for I know Democrats who are troubled by this, but these are average people.

You keep asking for proof, and if you do not see the obvious, and you do not accept the whistleblowers, then there is nothing to talk about, because as far as you know, there is nothing. Basically, you would be doing what these companies are accused of: silencing others.
 
Let’s say that Google, Facebook, Twitter, plus others, decide that the statement “Jesus is Lord” is not allowed on their platform.
Let’s not. If there is to be discussion of such a provocative hypothetical action, I think it better that it be applied to a hypothetical entity.
Those inputs are placed in the computer by people. The AI does not know what is offensive or not. People have to tell the AI what to look for.
Not necessarily, and this is one of the areas of misunderstanding. For machine learning, a training program is provided with sample data and the categories into which the data falls (labels). The machine learning program will extract “features” from the sample data and look for patterns between the data and the labels. The people behind the construction of these systems may not know what those patterns are (the resulting AI is not XAI). They generally test the resulting AI by giving it some test data and seeing if the results are what was expected. The real world is a larger dataset than the test data, and many unexpected results can be found later.
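To illustrate, here is a toy sketch of that flow in Python using scikit-learn (my choice for the example; real platforms use far larger models and datasets, and the comments and labels here are entirely hypothetical). Note that the developer supplies examples and labels, not rules; the patterns are learned.

Code:
# Toy sketch of supervised text classification. The training data is
# hypothetical: 1 = violation, 0 = not a violation.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "I will hurt you",
    "you deserve to suffer",
    "have a great day",
    "see you at the game",
]
labels = [1, 1, 0, 0]

# No human writes the decision rule; the model derives weights over
# word features from the labeled examples.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(comments, labels)

# The learned weights decide the outcome, and the developer may not be
# able to explain any individual decision.
print(model.predict(["you will suffer"]))  # likely [1]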

I presently have an exchange going on with another company that provided me a trained neural network and had me use the network within a program. In my testing, I’ve found some misclassifications, some of which I can understand and some of which I don’t. One that I do understand: if the network is provided with a picture of a quadruped on a white background (snow), it classifies the picture as a wolf. If the background is not white, it may be classified as a cat or a dog. Apparently the pictures of wolves in the training data were primarily in snowy environments, and the AI took the white background as a signal that a quadruped is a wolf.

For training moderation, a dataset may be composed of comments that have been evaluated by a human and labeled as either a violation or not a violation. The machine learning program would look for patterns in this text. A side effect that can happen is that if text with violations contains certain patterns or words, those patterns or words can become associated with violations even if those patterns or words are not themselves violations. This appears to have happened with the LGBTQ creators on YouTube. Declaring one’s self as LGBTQ is not itself a violation, but a number of people have mentioned LGBTQ in violating contexts. Words associated with LGBTQ thus begin to contribute towards the automated moderation disproportionately evaluating LGBTQ content as community violations.
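Continuing the toy classifier sketch from above, this is roughly how such a spurious association forms. “topicword” stands in for any innocuous term that happens to co-occur with violations in the hypothetical training data.

Code:
# Toy illustration of a spurious correlation: an innocuous token that
# co-occurs with violations becomes a violation signal by itself.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "topicword people should be hurt",  # violation mentioning the term
    "I hate topicword users",           # violation mentioning the term
    "nice weather today",               # benign, term absent
    "enjoy the game tonight",           # benign, term absent
]
labels = [1, 1, 0, 0]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(comments, labels)

# A benign self-identification inherits the learned association.
print(model.predict(["I am a topicword creator"]))  # likely [1]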
You keep asking for proof, and if you do not see the obvious, and you do not accept the whistle blowers
I see fundamental misunderstandings of how AI works. Largely, they take the form of people applying what they know about how imperative programming works to machine learning. In all fairness, ML isn’t an area that even a lot of people who work with computers understand. I don’t expect there to be a lot of understanding on the topic.
 
If only we had net neutrality laws.
Here, I assume by “net neutrality” that you are referring to enforcement of political neutrality and not to the law of the same name that obligated ISPs not to prioritize network traffic based on data type or intended host.

I don’t think it would help. There is no agreeable way of determining “neutrality.” Also, online entities are allowed to make value judgements; the same laws that protect those entities also protect CAF. If someone came along and made statements against the Catholic Church (as happens from time to time), this site is allowed to not be neutral. It has no obligation to blindly allow people to argue against the Church.
I thought Piers Morgan had a nice write-up on Facebook and Twitter ~
That headline may be a bit misleading. Until the policy was changed, anyone who tweeted the link would indiscriminately find their account locked; the headline makes it sound like they were specifically targeted. Also, anyone whose account was locked could unlock it by deleting the tweet, and they are now free to tweet the link.
 
With the election in a week, I think the reason for this is transparent.
It is. I wonder how the discussion would have gone if they were actually looking to gather information instead of generating material for a conclusion they already intended to promote.
 