Meta is testing a new chatbot… every man for himself

I know that, in general, when I talk about Facebook and Meta I tend to be pretty critical. It's not that I have any particular dislike for the social network or the company, though it's true that they've spent years building themselves a particularly negative image. On this occasion, however, I should clarify that the warning in the title is not because we're talking about a project from this particular company… or, well, not entirely.

The fact is that the company has launched BlenderBot 3, a general-purpose chatbot currently available only from the United States (I tried to access it through two VPNs, without success). By its own definition, it aims to hold open-ended conversations of the kind you might strike up in a bar at any hour, as well as answer the sort of questions usually put to digital assistants.

Like all LLMs, BlenderBot was trained on large text datasets in order to extract the statistical patterns that ultimately drive the responses the AI produces. Such systems have proven extremely flexible and have found wide application, from generating code for programmers to helping authors draft their next bestseller. But these models also have serious problems: they absorb the biases of their training data, and when they don't know the correct answer to a question, instead of admitting it, they tend to make one up.
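To make that idea concrete, here is a minimal sketch in Python, which of course has nothing to do with Meta's actual code: a toy bigram model stands in for a real LLM, and it shows both how "patterns" are learned from text and how fluently a made-up answer can fall out of them.

```python
# A toy bigram "language model": purely illustrative, nothing like
# Meta's real code. It learns which word tends to follow which, then
# generates text by chaining those patterns together.
import random
from collections import defaultdict

corpus = (
    "the capital of france is paris . "
    "the capital of spain is madrid . "
    "the capital of italy is rome ."
).split()

# Record which words follow which: the statistical "patterns" in the data.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 6) -> str:
    """Continue a text by sampling a statistically plausible next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# Because each step only looks one word back, the model can fluently
# assert "the capital of spain is rome": a confident, invented answer.
print(generate("the"))
```

The failure is structural: the model has no notion of not knowing something, only of what usually comes next, so a wrong answer comes out just as fluently as a right one.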

And here we can say something positive about Meta, since the purpose of BlenderBot is precisely to test a possible solution to the problem of invented answers. The chatbot's headline feature is that it can search the Internet for information before speaking on a given topic. Even better, users can click on its answers to see where it got the information from. In other words, BlenderBot 3 can cite its sources, offering a welcome dose of transparency.
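As a rough sketch of what "citing your sources" looks like in code (the tiny corpus and the word-overlap scoring below are my own assumptions, standing in for BlenderBot 3's real web-search backend), the retrieve-then-cite pattern boils down to this:

```python
# A minimal retrieve-then-cite sketch. The corpus and the relevance
# score are hypothetical stand-ins for a real web-search backend.
import re
from dataclasses import dataclass

@dataclass
class Document:
    url: str
    text: str

corpus = [
    Document("https://example.org/eiffel", "The Eiffel Tower is 330 metres tall."),
    Document("https://example.org/louvre", "The Louvre is the most visited museum in the world."),
]

def tokens(s: str) -> set[str]:
    return set(re.findall(r"\w+", s.lower()))

def answer_with_source(question: str) -> str:
    # Crude relevance score: how many question words appear in the document.
    q = tokens(question)
    best = max(corpus, key=lambda d: len(q & tokens(d.text)))
    # Quote the retrieved text and cite where it came from, instead of
    # answering from the model's weights alone.
    return f"{best.text} (source: {best.url})"

print(answer_with_source("How tall is the Eiffel Tower?"))
```

The real system obviously uses live search results and a far more sophisticated retriever, but the contract is the same: every claim arrives with a pointer back to where it came from.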

So Meta's starting point is good, at least in theory. The problem is that chatbots as we know them today suffer from two weaknesses. The first is that their learning is continuous, so it only takes a large enough number of users deciding to instill a malicious bias in the AI for it, lacking the safeguards to resist, to end up polluted and start reproducing that bias.
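A toy example makes the risk easy to see. The learning rule below is an assumption for illustration only (the real feedback loop is far more elaborate), but the failure mode is a familiar one: whoever talks to the bot the most, teaches it.

```python
# A toy illustration (assumed, not BlenderBot's actual training loop)
# of how continual learning from users can be poisoned: the bot echoes
# the reply it has seen most often, so a coordinated group can flip it.
from collections import Counter, defaultdict

replies: dict[str, Counter] = defaultdict(Counter)

def learn(prompt: str, reply: str) -> None:
    replies[prompt][reply] += 1

def respond(prompt: str) -> str:
    return replies[prompt].most_common(1)[0][0]

# Honest users teach a sensible answer...
for _ in range(10):
    learn("what shape is the earth?", "roughly a sphere")

# ...then a coordinated campaign simply outvotes them.
for _ in range(50):
    learn("what shape is the earth?", "flat, obviously")

print(respond("what shape is the earth?"))  # -> "flat, obviously"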

And the second problem is related to the first: this kind of algorithm works like a closed, opaque box, in the sense that nobody knows exactly what is happening inside. Those responsible for it can only monitor the AI constantly; they cannot "lift the hood" to see what is going on underneath, which makes identifying problems slow and difficult, and in many cases leaves them impossible to resolve.

So, this Meta chatbot seems like a step in the right direction, but after so many failures in the past, I admit that I’m pretty pessimistic.
