The entire premise that this is a misleading feature is that people don't understand AI can be wrong. So let's remove useful features so easily misled people are protected before they even understand what genAI is? Lowest common denominator in humanity wins again, I guess. Why not put a warning label on it that nobody reads? That's what we do for literally everything, because people are too lazy to research things before using them. As it stands, the web and Google results are a polluted mess of AI-generated content that is often wrong. I'd much rather Google used AI to sniff out the genAI sites and deprioritize them, and it can't get good at that unless it's getting feedback from humans.
The concern about content creators not being compensated is very real. There need to be laws or standards that specify what counts as publicly reusable content for AI systems to ingest and reuse, separate from publicly available information for humans to view on a site or for indexing agents to crawl. I think many sites run by organizations whose aim is simply to provide information to consumers would not object to being ingested by Google's AI. Just today I found a technical knowledge base answer, and of course I still clicked through to the source, where it highlighted how the answer was summarized. It saved me time, and Google stole nothing. So this isn't inherently a bad thing; it's just that the laws haven't caught up yet to protect content creators who don't want their info republished by Google, because then they lose clicks and ad revenue.