Congressional Virtue Signal on Social Media and Mental Health (Public Board)

by Cornpop Sutton ⌂, A bad bad dude who makes good shine., Sunday, February 04, 2024, 14:30 (17 days ago)
edited by Cornpop Sutton, Sunday, February 04, 2024, 14:36

Specimen: https://www.youtube.com/watch?v=FNaMjWCBdDU

Tedious and stupid.

IMO, social media inherently causes anxiety, fights, hatred, envy, and mental issues and insecurity.

It's the nature of the beast. The more you share of yourself, the more you can be subject to attack. The more information you reveal about yourself, the more attack "surface area" you give someone.

People have always tended to be shitty online, and this is a trope that is 30+ years old. It was just as true when all we had was USENET, AOL, and CompuServe.

Congress isn't going to be able to do shit about this. Ban online discussion entirely? I guess it could happen if Dementia Hitler thinks it's a problem.

And if you want strict age vetting of users to protect against trafficking and abuse of minors, Nikki Haley has a swell solution to that - an internet "driver's license."

All these hearings are is political feel-good theater, blaming successful big-company owners for what is inherently the human condition.

Congressional Virtue Signal on Social Media and Mental Health

by IT guy, Monday, February 05, 2024, 22:37 (15 days ago) @ Cornpop Sutton

It's all for show.

They shame these tech leaders publicly for different reasons but never do anything about it.

IMO, social media inherently causes anxiety, fights, hatred, envy, and mental issues and insecurity.

Yeah it's garbage.

Other than forums and YouTube, I don't do social media.

It's ruined the dating scene too. I recently watched a video about that. Rather than get their attention fix from one person, many younger people use social media and get attention from lots of different men (or women if a guy is a chad). Getting that attention fix is more important than building an actual relationship.

What I don't get

by Cornpop Sutton ⌂, A bad bad dude who makes good shine., Tuesday, February 06, 2024, 01:57 (15 days ago) @ IT guy

How does *anyone* put effective guardrails into social media to definitively prevent predation? That's the demand of the congress critters.

As I see it, there are several technological alternatives for solving this:

- AI scanning of all conversations on the platform, across the board.
- An internet ID a la fascistic Nikki Haley.
- Some long-assed, tedious process where you have to send the site some kind of birth-date documentation, such as a birth certificate or driver's license. Basically a one-site internet ID.

I suspect it will come down to AI plus humans who snoop on the juiciest conversations that get flagged.
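
To make that concrete, here's a rough sketch of what I mean. None of this is any platform's actual system; the scoring function, term list, and threshold are invented for illustration. The idea is just: score every message automatically, queue anything above a threshold, and only the queued stuff ever reaches a human.

# Toy sketch of "automated scan plus human review of flagged items".
# score_message() stands in for whatever model a real platform would run;
# the term list and threshold below are invented for illustration only.

from dataclasses import dataclass, field
from typing import List

SUSPECT_TERMS = {"kill", "meet up alone", "send photos"}  # hypothetical
FLAG_THRESHOLD = 0.3                                      # hypothetical

def score_message(text: str) -> float:
    # Crude stand-in for an ML classifier: fraction of suspect terms present.
    hits = sum(term in text.lower() for term in SUSPECT_TERMS)
    return hits / len(SUSPECT_TERMS)

@dataclass
class ReviewQueue:
    items: List[str] = field(default_factory=list)

    def maybe_flag(self, text: str) -> None:
        if score_message(text) >= FLAG_THRESHOLD:
            self.items.append(text)  # only flagged messages ever reach a human

queue = ReviewQueue()
for msg in ["see you at practice tomorrow",
            "send photos and don't tell your parents"]:
    queue.maybe_flag(msg)

print(queue.items)  # the humans snoop only on what lands here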

What I don't get

by JoFrance, Tuesday, February 06, 2024, 20:34 (14 days ago) @ Cornpop Sutton

I don't know what Congress expected to accomplish by doing this again. It's just another dog-and-pony show, and then nothing happens after that.

I agree that social media seems to bring out the worst in some people. It always has. They become FB stars or online bullies or social justice warriors - it's all no good. The pedophiles and other creeps out there have a perfect environment because they can hide or blend in.

FB does scan conversations for keywords, but it doesn't seem to understand the context of the conversation. My one nephew is a FB star and had used the phrase "I'd have to kill you" in one of his posts. He was just joking with a friend, but he got suspended for a month because of it.

Maybe AI would improve on keyword searches by considering context, but I don't know if it's capable of that.
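
For what it's worth, the difference between a bare keyword match and something that considers context looks roughly like this. This is purely an illustration; the word list, the idiom list, and both functions are made up here and are not FB's actual checks.

# Illustration only of keyword matching vs. "considering context".
# Neither function is FB's real system; the lists below are made up.

FLAGGED_WORDS = {"kill"}                       # hypothetical keyword list
KNOWN_IDIOMS = {"i'd have to kill you",        # "...but then I'd have to kill you"
                "dressed to kill",
                "killing it"}

def keyword_flag(post: str) -> bool:
    # What a context-blind scanner does: any hit on the word list flags the post.
    return any(word in post.lower() for word in FLAGGED_WORDS)

def context_aware_flag(post: str) -> bool:
    # Crude stand-in for "considering context": known figures of speech get a pass.
    text = post.lower()
    if any(idiom in text for idiom in KNOWN_IDIOMS):
        return False
    return keyword_flag(text)

post = "If I told you the punchline I'd have to kill you"
print(keyword_flag(post))        # True  -> automatic month-long suspension
print(context_aware_flag(post))  # False -> a human, or a better model, lets the joke go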

An internet ID wouldn't make a difference. Everyone already has an IP address and an ISP. If someone is an abuser, you can go to the ISP and they will turn over their identity. You may have to get a subpoena to do it, but they have that information.

One failing of FB is their FB Live. A lot of sickos out there like to broadcast their various crimes, including murder, in real time. I think they should get rid of that if they can't police it.

"I suspect it will fall down to AI plus humans who snoop on the most juicy conversations that get flagged."

There isn't another way to control social media. Someone has to snoop.

Censorship

by IT guy, Tuesday, February 06, 2024, 23:42 (14 days ago) @ JoFrance

I think my beef with those congressional hearings was when all of the major social media outlets were censoring views that didn't agree with the mainstream. They'd bring in the tech leaders, and Republicans like Ted Cruz and Ron Johnson would talk down to them and say mean things... but nothing ever happened.

The same thing happens here, although this is a different issue and the proposed solution is censorship. As many downsides as there are with social media, do we as a society want to go in that direction? With that being said, it may not be a bad idea in extreme cases like this:

One failing of FB is their FB Live. A lot of sickos out there like to broadcast their various crimes, including murder, in real time. I think they should get rid of that if they can't police it.

Censorship

by JoFrance, Thursday, February 08, 2024, 19:12 (12 days ago) @ IT guy

We should never have censorship of different opinions on social media. Social media sites can decide on certain rules for users of their site and use content checking software to make sure that users follow them, but they should never censor opinions that don't fit the government narrative.

One example is what they did with Covid. They actually blocked opposing information that was true at the request of the government because it didn't fit their narrative. Opposing Tweets were taken down or blocked. I'm so glad it got exposed after Elon Musk bought Twitter. He did us all a great favor. Twitter was a huge loss for the deep state players.

FB needs to do a better job of policing the FB Live platform or get rid of it. In this case, it isn't censorship, it's maintaining human decency in our society.

Censorship

by ,ndo, Certifiable!, Friday, February 09, 2024, 05:54 (12 days ago) @ JoFrance

We should never have censorship of different opinions on social media. Social media sites can decide on certain rules for users of their site and use content checking software to make sure that users follow them, but they should never censor opinions that don't fit the government narrative.

I disagree (with the bolded bit).

If someone posts something that is illegal then the platform can inform the police. And they can have a legal department to help with that.

But outside that, a platform should not be making policies. Policy is the domain of the society's government, following the desires of the society. The fact that platforms have taken on policy-making is the reason why we have so many problems today with "social media".

Inevitable, that

by Cornpop Sutton ⌂, A bad bad dude who makes good shine., Friday, February 09, 2024, 09:41 (12 days ago) @ ,ndo
edited by Cornpop Sutton, Friday, February 09, 2024, 14:35

But outside that, a platform should not be making policies. Policy is the domain of the society's government,

No, can't work.

Well, I mean IDEALLY yes.

But a platform is a business, and therefore in any legal system it incurs liabilities due to the actions of its customers. It's extremely difficult to shield the operator from those liabilities.

Therefore, any platform with identifiable ownership will need to protect itself, and in order to do that it will restrict user activities in some manner.

Your statement is idealistic and unrealizable.

I used to (20+ years ago) consider it a personal affront when a sysop would censor one of my posts because I called out some contract broker. I had no idea about lawfare. Then later, when I ran a board, I got the occasional threatening email from some business that objected to a comment like "scam" in connection with their business. I could have stood up to it, but forums make you no money and I don't have any reason to back up someone's forgotten post, so...

Believing that a platform should give users unrestricted privileges is kind of like believing that "cloud" means more than someone else's computer. Somehow, somewhere, the buck stops.

Inevitable, that

by ,ndo, Certifiable!, Friday, February 09, 2024, 14:41 (12 days ago) @ Cornpop Sutton

I took it as read that a company can take action to protect itself, i.e. make business policy.

Another way of looking at it is US Communications Act Section 230. US platforms use it to say "we're a platform, not a publisher" to protect themselves from legal liability for the content that they "display" and "not publish", yet they make editorial/publishing decisions on that same content. So the government should be suing their arses off.

But of course lots of things "should" happen that don't. We know all about that.

Inevitable, that

by Cornpop Sutton ⌂, A bad bad dude who makes good shine., Friday, February 09, 2024, 18:36 (12 days ago) @ ,ndo

Right. The law's statements are very fine-grained. The ability of the system to reckon with them and respect them, not so much.

Inevitable, that

by JoFrance, Friday, February 09, 2024, 19:21 (11 days ago) @ Cornpop Sutton

That's exactly right. Businesses have no choice but to restrict some user content because we live in such a litigious society.

Past condescension of Congress toward tech leaders

by Cornpop Sutton ⌂, A bad bad dude who makes good shine., Thursday, February 08, 2024, 19:35 (12 days ago) @ IT guy

Yah, they can totally go fuck themselves!

I remember the clueless ignorant condescension of individuals such as Ted Cruz and Marco Rubio toward social media figures.

The Democrats and libs have been light years ahead of conservatives in understanding the power of social media.

What I don't get

by ,ndo, Certifiable!, Friday, February 09, 2024, 05:45 (12 days ago) @ JoFrance

FB does scan conversations for keywords, but it doesn't seem to understand the context of the conversation. My one nephew is a FB star and had used the phrase "I'd have to kill you" in one of his posts. He was just joking with a friend, but he got suspended for a month because of it.

This is where it's retarded, or malicious. "Everyone" knows that "I'd have to kill you" is a cliche. But what's genuinely malicious is that you don't have to suspend someone for a month automatically just because they say something like that. In a decent world, the platform would ask the user for a "please explain" before taking any actual action. The fact that they don't is proof that they are not decent.

What I don't get

by JoFrance, Friday, February 09, 2024, 18:04 (12 days ago) @ ,ndo

What FB did is really not the right way to handle it. I think they gave him the opportunity to explain but for whatever reason his answer was not good enough for them. I know he's been suspended from FB in the past for other things he posted, so maybe that's why they didn't let it go.

My nephew used to do a comedy type show at a local nightclub and when that got discontinued during Covid, he developed a following on FB. As you know, liberals have no sense of humor, so he has to work around their content checkers.

FB content checkers do not consider that he is joking and that a lot of what he posts are comedy routines. If FB looked at the many posts on his account they would see he is not a killer or a threat to the community.
