News AI-generated content and other unfavorable practices have put longtime staple CNET on Wikipedia's blacklisted sources

Status
Not open for further replies.

vanadiel007

Distinguished
Oct 21, 2015
226
218
18,960
It poses an interesting question: if AI is going to revolutionize our future, according to the media reports, why would it be a bad thing to do journalism using AI?

Where would we draw the line on how to use it, and should we even draw a line considering it's supposed to help us?
 
  • Like
Reactions: drajitsh
Nov 3, 2023
29
23
35
It poses an interesting question: if AI is going to revolutionize our future, according to the media reports, why would it be a bad thing to do journalism using AI?

Where would we draw the line on how to use it, and should we even draw a line considering it's supposed to help us?

Which media reports?

Also, as covered in This Article, AI cannot actually test or experience anything it "writes" about for itself. Thus, it can generate a website of over 1000 misleading or incorrect articles that still steal search results from an Actual Human who wrote An Actual Useful Resource. You're asking "why it would be a bad thing" when the reasons are directly in front of you: generative AI functions as an automated content theft machine that is incapable of actually turning around truly high-quality work. And the only times it can even come close, it does so by directly stealing from the livelihood of Actual Human Beings Who Did The Work Instead.

"It's supposed to help us" really doesn't seem to be true at all. The loudest gen. AI fanatics on Twitter are openly hoping to weaponize it against artists and other people they simply don't want to pay for their work. They want the reward of good art, good 3D, good writing, etc. without working to make it themselves or paying an actual person to do it for them. The head of OpenAI is out here saying he wants it to "replace the median human", which sounds utterly inhuman to anyone who isn't a rich tech bro who thinks they're above the rest of humanity.

Artificial Intelligence technology as a whole can certainly have promise in some areas, especially scientific research and such. But "generative AI" is really just a method through which wealthy media companies wish to further devalue the labor of the actual human beings that those companies and even those AIs rely on to function at all, and that's Not a good thing.
 

plateLunch

Honorable
Mar 31, 2017
89
29
10,560
Is Red Ventures a private equity firm? Whenever I hear a story about a reputable company with a good product suddenly turning out junk or in financial trouble, private equity more often than not is behind the decline.
 
  • Like
Reactions: Nitrate55
Apr 1, 2020
1,466
1,132
7,060
Just remember that human writers were publishing questionable, misleading, and/or outright false information, sometimes under the banner of a reputable source, long before the internet ever existed, whether accidentally or purposefully, and they continue to do so in everything from newspapers to academic articles to experimental reports.

Also remember that much of the content of almost any tech site that isn't first-party reviews is based on press releases and reposted/reinterpreted posts from other sources. Take the TomsHardware article on using whey protein to extract gold from motherboards (currently on the front page): there is no source link in the article, or even a mention of which journal the work was published in (very bad form, Aaron Klotz). It only states that it was by "Scientist Raffaele Mezzenga from the Department of Health Sciences and Technology", without even naming the institution. Yet if you ask AI, specifically Microsoft Copilot, where it came from, you get the source article https://onlinelibrary.wiley.com/doi/10.1002/adma.202310642 cited in its summary. TomsHardware, and all tech sites, need to have a source citation requirement in their articles, ESPECIALLY if they criticize AI for not doing that, though TH has praised Copilot for citing its sources. Copying information from a source, summarizing it and/or putting it in your own words, and/or copying direct passages without citing the source is no different from using generative AI programs that also don't cite their sources when generating an article.

As far as CNET is concerned, I haven't considered them a go-to source since the dialup days, when CBS bought the combined CNET/ZDNet.
 
Last edited:

rluker5

Distinguished
Jun 23, 2014
635
384
19,260
A long time ago, dialup era, I used to download software from that site. I stopped when the downloads kept installing Chrome and making it my default browser.
 

baboma

Prominent
Nov 3, 2022
205
189
770
>Is Red Ventures a private equity firm?

"Red Ventures, which started as a digital marketing company, has attracted serious investments from private equity firms. Its location has helped obscure what is perhaps the biggest digital publisher in America, a 4,500-employee juggernaut that says it has roughly $2 billion in annual revenues, a conservative valuation earlier this year of more than $11 billion, and more readers, as measured by Comscore, than any media brand you’ve ever heard of — an average of 751 million visits a month..."

https://nytimes.com/2021/08/15/business/media/red-ventures-digital-media.html (sub req)

A good piece on Red Ventures if you have access to NYT.

>Whenever I hear a story about a reputable company with a good product suddenly turning out junk or in financial trouble, private equity more often than not is behind the decline.

It's capitalism at work. As with any economic structure, it has its up and down sides. People tend to liken PE firms to vultures feeding on the dead and dying, but vultures, as abhorrent as people make them out to be, play a useful role in ecosystems.

Anyway, the real cause of the decline of news isn't because of PE eating up distressed news outlets. It's because of people--that's you and me--not willing to pay for news. That's it. No need to cast about for scapegoats or villains. We are the villains.


As for AI, it's a tool, and it can be useful for writing, journalism or otherwise. For news outlets, AI can boost productivity for journalists, allowing more content. But AI hasn't (yet) reached a level where it can author pieces by itself. That AI pieces are of poor quality isn't an AI issue; it's poor use of the tool.

News outlets, as distressed and starved as they are for funds, will of course push the envelope on AI use to cut costs when they can. That is to be expected. It's no one's fault. It's just the way things are.
 
  • Like
Reactions: drajitsh

Notton

Prominent
Dec 29, 2023
321
276
560
Anyway, the real cause of the decline of news isn't because of PE eating up distressed news outlets. It's because of people--that's you and me--not willing to pay for news. That's it. No need to cast about for scapegoats or villains. We are the villains.
No, it's not our individual fault that news is dying.
It's a very nuanced subject, but the gist of it is that they no longer sell a product the regular (majority) user wants, and the quality of that product has been in decline for decades.

PE firms make it worse, because they're only there to extract maximum profit in the shortest amount of time. How do you do that? Fire almost everyone and sell off all the assets. A smaller staff and a smaller office mean lower expenditures.
And there is no law that says you can't buy a healthy business and hollow it out into a carcass. It's low risk and high reward, assuming you can foot the initial bill.
 
  • Like
Reactions: Nitrate55

vanadiel007

Distinguished
Oct 21, 2015
226
218
18,960
Which media reports?

Also, as covered in This Article, AI cannot actually test or experience anything it "writes" about for itself. Thus, it can generate a website of over 1000 misleading or incorrect articles that still steal search results from an Actual Human who wrote An Actual Useful Resource. You're asking "why it would be a bad thing" when the reasons are directly in front of you: generative AI functions as an automated content theft machine that is incapable of actually turning around truly high-quality work. And the only times it can even come close, it does so by directly stealing from the livelihood of Actual Human Beings Who Did The Work Instead.

"It's supposed to help us" really doesn't seem to be true at all. The loudest gen. AI fanatics on Twitter are openly hoping to weaponize it against artists and other people they simply don't want to pay for their work. They want the reward of good art, good 3D, good writing, etc. without working to make it themselves or paying an actual person to do it for them. The head of OpenAI is out here saying he wants it to "replace the median human", which sounds utterly inhuman to anyone who isn't a rich tech bro who thinks they're above the rest of humanity.

Artificial Intelligence technology as a whole can certainly have promise in some areas, especially scientific research and such. But "generative AI" is really just a method through which wealthy media companies wish to further devalue the labor of the actual human beings that those companies and even those AIs rely on to function at all, and that's Not a good thing.

That is what I was pointing out: AI needs to be trained, so why would we not train it to perform journalism? We could make it "intelligent" to the point where it could perform those functions. Or so the media articles claim.

We already trust it to drive cars for us.
 
Apr 1, 2020
1,466
1,132
7,060
The problem with "training AI" is that the AI will inherit its trainer's (un)intentional views and biases, so a journalistic AI will become as left- or right-leaning as most news sources these days. At best, a journalistic AI can only be encyclopedic, compiling and summarizing from multiple, hopefully cited and reputable, sources.

That is, of course, unless it is an AI that views an event with a drone camera and reports -exactly- what it sees.
 
The ironic plot twist, this piece here was AI generated too
That would not be all that surprising...

CNET has been publicly admonished since it first started posting thinly-veiled AI-generated content on its site in late 2022...

...But following November 2023, when Red Ventures began posting AI-generated content to what used to be one of the most reputable tech sites...
So which was it, 2022 or 2023? AI chatbots do tend to be bad with numbers. >_>

Also, could CNET be considered "one of the most reputable tech sites" even prior to that? Maybe if we look back a couple of decades, the statement could technically be considered true.
 

3tank

Prominent
Jun 21, 2022
35
28
560
Seriously, Wikipedia hands out editorial powers for political purposes and will lock an account so that libel or misinformation can't be corrected by the victim, then goes out of its way to protect the accounts of fellow travelers.
They're in the same camp as CNET for all anyone should care; actually worse, because their actions are willful.
 
  • Like
Reactions: RandomWan