@apiltch I had a few thoughts about your piece.
It takes into account criteria such as the background of the author, whether that author describes relevant experiences and whether that publication has a reputation for being trustworthy. By definition, a bot can meet none of these criteria.
Sure it can! Its developers can pre-filter its training data and/or bias the training to favor results from more trustworthy and knowledgeable sources.
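To make that concrete, here's a minimal sketch in Python of what source-based filtering and loss-weighting could look like. Everything here is hypothetical: the TRUST table, its scores, and the filter_and_weight helper are invented for illustration, not anyone's actual training pipeline.

```python
# Minimal sketch of the idea, not a real pipeline. Assumes each
# training example records where it came from, and that we have
# some (hypothetical) trust score per source.

TRUST = {
    "peer_reviewed_journal": 0.95,
    "established_review_site": 0.8,
    "random_forum_post": 0.4,
    "content_farm": 0.1,
}

def filter_and_weight(examples, min_trust=0.5):
    """Drop low-trust sources and weight the rest by trust.

    `examples` is an iterable of dicts like
    {"text": "...", "source": "established_review_site"}.
    Yields (text, weight) pairs; the weight could scale each
    example's contribution to the training loss.
    """
    for ex in examples:
        trust = TRUST.get(ex["source"], 0.0)
        if trust >= min_trust:        # pre-filtering step
            yield ex["text"], trust   # loss-weighting step

corpus = [
    {"text": "Orion is visible in winter.", "source": "established_review_site"},
    {"text": "BUY CHEAP TELESCOPES NOW", "source": "content_farm"},
]
print(list(filter_and_weight(corpus)))  # the content-farm line is dropped
```

The point is just that "which sources get in, and with what weight" is a design decision, not an impossibility.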
This is why, when I go to family gatherings, relatives approach me and ask questions like “what laptop should I buy?” They could just do a web search for that question (and maybe they should), but they know me and believe that I won’t steer them wrong.
This seems to embed two assumptions. The first is about expertise, which I believe AI can address at least as well as any single human expert (remember, we humans are fallible and have our own limitations). The second is about conflict of interest, and that could ultimately be trickier to resolve.
We don’t know whose biases sit behind the bot’s lists of recipes, constellations or anything else. The data could even be drawn from commercial sources.
Humans aren't immune to undue influences, either. Sometimes the human isn't even aware of the ways their opinions are being shaped by the forces around them. For instance, if you only review products whose manufacturers send free samples (and even if this isn't true of Tom's, it's definitely true of some review sites!), then you'd be biased toward mass-market products and might miss growing trends among little upstarts on the likes of Kickstarter.
The potential exists for AI to be less biased, by pooling data from more sources and even employing sophisticated techniques to detect bias and manipulation. I'm not saying that's the most likely outcome, but it's plausible. And as long as we view this as simply a black-and-white issue of humans vs. AI, we miss that sort of nuance and the opportunities to demand greater transparency and controls against bias and manipulation.
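As a toy example of what "pooling sources to detect manipulation" could mean in practice, here's a sketch that aggregates ratings from several made-up sources and flags the one that deviates sharply from the consensus. The sources, scores, and threshold are all invented for illustration:

```python
# Toy illustration: pool ratings for one product from many sources
# and flag sources whose scores deviate sharply from the consensus.

from statistics import median

ratings = {
    "review_site_a": 4.2,
    "review_site_b": 4.0,
    "forum_consensus": 3.9,
    "sponsored_blog": 5.0,  # suspiciously glowing
}

consensus = median(ratings.values())
suspects = {
    src: score
    for src, score in ratings.items()
    if abs(score - consensus) > 0.7  # arbitrary deviation threshold
}

print(f"pooled (median) rating: {consensus}")
print(f"possible bias/manipulation: {suspects}")  # flags sponsored_blog
```

Real systems would need something far more sophisticated, but even this crude median-plus-outlier check is the kind of cross-source sanity test that no single human reviewer can perform at scale.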
chatbots like those shown by Google and Bing are an existential threat to anyone who gets paid for their words
Not sure about that. I think they still need human guidance and input. There's certainly a risk of human authors being devalued or edged out, but I don't think we'll see them leave the scene entirely anytime soon.
when the bot says Orion is a great constellation to view, that’s because at some point it visited a site like our sister site Space.com and gathered information about constellations.
Don't exclude those of us who freely provide information in forum posts, on Twitter, Reddit, or old-school Usenet newsgroups and email lists. Paid authors certainly have the biggest influence, but there's a lot of expertise behind some of the non-commercial content people have been posting since the very first days of the internet.
Getty Images is currently suing Stability AI, the company behind the Stable Diffusion image generator, for using 10 million of its pictures to train the model.
You could argue that, by the same logic, they'd have to sue all of the human artists who've ever seen their work.
Amazon was credibly accused of copying products from its own third-party sellers
That was a targeted activity that ripped off specific sellers; it's not the same as broadly learning from publicly available information. If you sue an AI chatbot for repeating information it learned on the web, wouldn't you also have to sue all of the humans who do the same? That would, I think, end up including most people who have ever read anything on the internet.
You could argue that Google’s LaMDA (which powers Bard) and Bing’s OpenAI engine are just doing the same thing that a human author might do. They are reading primary sources and then summarizing them in their own words. But no human author has the kind of processing power or knowledge base that these AIs do.
Essentially, what you're advocating is locking up human knowledge, so that we can only drink from the pool through the soda straw of our own eyeballs and brains. I'm not sure that's the right answer. Humans are tool-using creatures, and many of our tools have both benefits and drawbacks. Rather than reject tools, what we tend to do is design them to be safer and easier to use while reducing their risks and drawbacks. I think we should be thoughtful about how chatbots are designed and used, as well as how monetization works in the knowledge economy. What bothers me is that there's a strong Luddite undercurrent to these objections, and perhaps at least those of us on a tech-focused website can agree that that sort of mentality generally hasn't served us well.