Given the inherent biases in any LLM's training data, the hallucination issue you've brought up, and the cost of running an LLM at scale being prohibitive for anyone besides private–state partnerships, do you think that will allay conspiracists' valid concerns about the centralization of information access, à la the decline in the quality of Google search results over the past decade and a half?